


Will GPT cool off? Buffett, Musk and other big names call for a suspension
ChatGPT went viral almost overnight and has undoubtedly become a high-profile "star product" in the field of artificial intelligence. However, as ChatGPT has come into wider use, it has been linked to a string of negative news: academic fraud, the creation of hacking tools, and the leakage of users' sensitive chat information. As a result, society has begun to re-examine artificial intelligence technologies like ChatGPT.
Unlike the industrial robots of the past, which could only replace humans in repetitive, mechanical, or dangerous manual labor, AI technologies like ChatGPT offer stronger conversational ability, greater flexibility, and a degree of "independent thinking," reshaping society's understanding of the field. The artificial intelligence products released by technology giants have already been integrated into industries such as writing, programming, and painting, triggering a new round of technological change.
It is beyond dispute that once artificial intelligence technology like ChatGPT matures, it will be able to serve society and benefit mankind. At the same time, such technology is a "double-edged sword": its security issues, compliance issues, and impact on the division of labor in society have drawn the attention of many industry leaders and governments.
Prominent figures call for a careful look at the research and development of ChatGPT-like artificial intelligence
While society as a whole is immersed in the "fantasy" that ChatGPT-like artificial intelligence will set off a fourth industrial revolution, many leading figures in the technology community have voiced dissenting opinions and even expressed concern about its rapid development.
Buffett expressed doubts about whether artificial intelligence technology is beneficial to society
Recently, Warren Buffett, the "godfather" of the investment world, shared his thoughts on the rapid development of ChatGPT-like artificial intelligence in a media interview. Buffett told reporters that while artificial intelligence has undoubtedly made incredible progress in technical capability, there is as yet no large-scale evidence that its development is beneficial to mankind when judged by its impact on society as a whole. From this point of view, the development of artificial intelligence should proceed cautiously and rationally.
Buffett's concerns are not unreasonable. The rapid popularity of ChatGPT has obscured the security, compliance, and other issues behind it. Most people's direct experience of this kind of technology is simply that it is advanced and efficient; they do not yet have a deep understanding of its potential threats.
Judging from information disclosed by the media so far, there is already evidence that some hackers are using products like ChatGPT to write malicious code and phishing emails, to say nothing of ChatGPT's own security problems (for example, Samsung suffered information leaks after employees fed internal data into ChatGPT, and a list of ChatGPT users' chat records was leaked).
Musk and others jointly call for a suspension of training of more powerful artificial intelligence systems
Before Buffett voiced his concerns about the development of artificial intelligence, Tesla CEO Elon Musk had joined thousands of people from the technology sector in an open letter calling for a moratorium on training artificial intelligence systems more powerful than GPT-4.
On March 29, Musk signed the open letter, alongside thousands of figures from industry and academia, calling on all artificial intelligence laboratories to suspend training of AI systems more powerful than GPT-4 for at least six months, in order to develop and enforce safety protocols.
Regarding this move, Musk said that given the strong technical capabilities ChatGPT has demonstrated, its security issues require oversight. He has also repeatedly emphasized that artificial intelligence, while powerful, carries enormous risks and is undoubtedly a double-edged sword; he has even pessimistically argued that artificial intelligence is one of the biggest risks facing human civilization.
It is worth mentioning that, unlike Buffett, Musk, and others who take a "negative" attitude toward the development of artificial intelligence, Bill Gates, the former world's richest man, is relatively optimistic. He has said that pausing development will not fully solve the problem; figuring out how to use and develop artificial intelligence rationally is the better solution.
Countries seriously examine the development of ChatGPT-like artificial intelligence technology
Many films, TV series, and novels have imagined artificial intelligence robots serving society and benefiting mankind, while also questioning their safety, reliability, and loyalty, and whether they might one day replace humans and rule the world. Products like ChatGPT are slowly pushing such ideas toward reality. As humans begin to truly face the "double-edged sword" of artificial intelligence, they need to seriously consider its security, compliance, and related issues.
Italy fired the "first shot" on ChatGPT's security issues
More than ten days ago, the Italian government banned the operation of ChatGPT in Italy on the grounds that OpenAI had illegally collected a large amount of Italian users' personal data and had failed to establish an age-verification mechanism to prevent minors from accessing inappropriate material. A few days later, the Italian data protection authority softened its stance slightly and presented OpenAI with a series of requirements; if they are met, Italy will allow ChatGPT to continue operating in the country.
With this, Italy fired the "first shot" on ChatGPT security and compliance issues.
Italy's actions against ChatGPT have attracted the attention of other European countries, which are now studying whether they need to impose strict restrictions on ChatGPT-like artificial intelligence. Among them, the Spanish data protection agency has asked the EU privacy regulator to assess privacy compliance issues surrounding the use of ChatGPT.
Soon afterwards, German regulators signaled that a ban on ChatGPT was also possible, and countries such as France, Ireland, and Spain began to consider stricter supervision of AI chatbots.
The United States follows the EU closely and plans to regulate artificial intelligence
Italy's tightened supervision of ChatGPT has set off a chain reaction: it has not only heightened EU countries' concerns about artificial intelligence, but also reached across the ocean to the United States.
On April 11, local time, The Wall Street Journal reported that, amid American concerns about the security threats that artificial intelligence technologies such as ChatGPT may pose, the Biden administration had begun studying whether restrictions on such tools are necessary.
Following this, the U.S. Department of Commerce formally solicited public comments on related accountability measures on April 11 (the comment period is 60 days), including whether new artificial intelligence models with potential risks should go through an approval and certification process before release. This move is seen as the first step toward potential regulation of artificial intelligence technology in the United States.
On the urgent need to set rules for the development of ChatGPT-like artificial intelligence, Alan Davidson, head of the National Telecommunications and Information Administration, an agency under the U.S. Department of Commerce, pointed out that even though artificial intelligence is still at an early stage, the government must consider the possibility of criminals using the technology for illegal activities, and therefore has to set some necessary boundaries.
The Chinese government issued management measures in the field of artificial intelligence
Not only the United States but also China has begun to tighten oversight of research and development in the field of artificial intelligence. On April 11, the Cyberspace Administration of China released the "Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment)" (hereinafter, the "Management Measures"), with the stated aim of promoting the healthy development and standardized application of generative artificial intelligence technology. The Management Measures reportedly contain 21 articles in total, and define "generative artificial intelligence" as technology that generates text, images, sound, video, code, and other content based on algorithms, models, and rules.
The "Administrative Measures" restrict the division of responsibilities for information generated by the application of artificial intelligence technology, and clearly point out that organizations and individuals who use generative artificial intelligence products to provide services such as chat and text, image, and sound generation, including through the provision of Programmable interfaces and other methods allow others to generate text, images, sounds, etc. on their own, and bear the responsibility of the producer of the content generated by the product; if personal information is involved, bear the legal responsibility of the personal information processor and fulfill the personal information protection obligations.
In addition, requirements are imposed on providers of artificial intelligence products. The Management Measures stipulate that before a product is made available to the public, the provider must apply to the national cyberspace authority for a security assessment in accordance with the "Regulations on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities," and complete algorithm filing, modification, and cancellation procedures in accordance with the "Provisions on the Administration of Algorithm Recommendations for Internet Information Services."
In public discussion of artificial intelligence products like ChatGPT, much of the concern centers on the data manufacturers use to train them. The Management Measures highlight this point as well, clearly stating that the pre-training and fine-tuning data used for generative AI products should meet the following requirements:
- (1) Comply with the requirements of laws and regulations such as the "Cybersecurity Law of the People's Republic of China";
- (2) Contain no content that infringes intellectual property rights;
- (3) Where the data contains personal information, the consent of the personal information subject has been obtained, or other circumstances permitted by laws and administrative regulations apply;
- (4) Be able to ensure the authenticity, accuracy, objectivity, and diversity of the data;
- (5) Meet other regulatory requirements of the national cyberspace authority on generative artificial intelligence services.
Finally, the Management Measures emphasize that when generated content discovered during operation or reported by users fails to meet the requirements of the Measures, the provider must, in addition to taking measures such as content filtering, prevent it from being generated again within three months through methods such as model optimization and retraining.
At present, after numerous training iterations, artificial intelligence technologies like ChatGPT are developing rapidly; by contrast, the existing legal systems of most countries are clearly not mature enough to supervise them. Formulating regulations to govern the development of artificial intelligence has therefore become a pressing problem for every country.
Conclusion
Computing technology is advancing rapidly, the wave of artificial intelligence revolution is inevitable, and some forms of human work will inevitably be replaced in the future. Moreover, the widespread application of artificial intelligence technologies like ChatGPT will certainly bring security issues such as data leakage, enhanced hacking tools, large-scale fraud, and AI-fabricated images.
However, none of the above can be an excuse to suspend research and development of artificial intelligence. Humanity should not greet the arrival of artificial intelligence with fear. We must always remember that "people" are the foundation of its development. In the future, through continuous optimization of the technology and the formulation of regulations, the harm caused by its by-products can be minimized.