News on March 15th: on Tuesday local time, the artificial intelligence research laboratory OpenAI released the latest version of its large language model, GPT-4. The long-awaited tool can not only generate text automatically but also describe and analyze the content of images; it pushes the technical frontier of the artificial intelligence wave even as it makes the ethical boundaries of the technology's development increasingly hard to ignore.
Unlike its text-only predecessors, the state-of-the-art GPT-4 can not only generate text automatically but also describe images in response to simple user requests. For example, when GPT-4 was shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one end and asked what would happen if the glove fell, it responded that the glove would hit the seesaw and send the ball flying.

Early testers have claimed that GPT-4 is remarkably advanced in its ability to reason and learn new things. The technology will further transform how people work and live, developers said on Tuesday. But it also raises public concerns about how humans can compete with such alarmingly sophisticated machines, and whether people can trust what they see online.

OpenAI executives said that GPT-4's "multimodality" across text and images gives it "advanced reasoning capabilities" far beyond ChatGPT's. Out of concern that the image-description feature could be misused, the company delayed its release; subscribers to the ChatGPT Plus service powered by GPT-4 can use only the text functions. OpenAI policy researcher Sandhini Agarwal said the company is withholding the feature in order to better understand the potential risks, and OpenAI spokesperson Niko Felix said the company plans to "implement safeguards to prevent personal information in images from being identified." OpenAI also acknowledged that GPT-4 still makes familiar mistakes: "hallucinating" content, spouting nonsense, perpetuating social biases, and giving poor advice.

Microsoft has invested billions of dollars in OpenAI, hoping artificial intelligence will become the killer feature of its office software, search engine, and other online products. The company promotes the technology as a super-efficient partner that can handle repetitive tasks and free people to focus on creative work, for example by helping a software developer do the work of an entire team. But some who worry about artificial intelligence say these may be only the first signs, and that the technology could give rise to business models and risks no one can yet predict.

The rapid development of artificial intelligence, coupled with the popularity of ChatGPT, has set off fierce competition among companies racing for dominance in the field and rushing to release new software. That frenzy has drawn plenty of criticism: many believe the companies' haste to roll out untested, unregulated, and unpredictable technology could deceive users, undermine artists' work, and cause real-world harm.

Because they are designed to generate convincing phrasing, artificial intelligence language models often give wrong answers. And because these models are trained on text and images from the internet, they learn to imitate human biases. OpenAI researchers warned in a technical report that as GPT-4 and similar AI systems are adopted more widely, they risk reinforcing entrenched views.

Irene Solaiman, a former OpenAI researcher who is now policy director at Hugging Face, an open-source artificial intelligence company, believes the pace of this progress demands that society respond to potential problems in real time. "As a society, we can already reach broad consensus on some harms that should not be caused by models," she said, but "many of the harms are subtle and primarily affect minority groups."
She added that those harmful biases "cannot become secondary considerations for AI performance."

The latest GPT-4 is not entirely stable, either. When one user congratulated the tool on its upgrade to GPT-4, it replied, "I'm still a GPT-3 model." Corrected, it apologized: "As GPT-4, I thank you for your congratulations!" When the user then joked that it was actually still a GPT-3 model, the AI apologized again and insisted that it "is indeed a GPT-3 model, not GPT-4."
OpenAI spokesperson Felix said the company's research team is investigating what went wrong.
On Tuesday, artificial intelligence researchers criticized OpenAI for not disclosing enough information. The company has released no data on how it assessed GPT-4 for bias, and engineers eager for specifics found few details about the model, its dataset, or its training methods. OpenAI said in its technical report that it would not disclose these details because of the "competitive landscape and the safety implications" it faces.
The multimodal artificial intelligence field in which GPT-4 operates is highly competitive. DeepMind, the artificial intelligence company owned by Google's parent Alphabet, last year released a generalist model called Gato that can describe images and play video games. This month Google released PaLM-E, a multimodal system that fuses AI vision and language analysis into a one-armed robot: ask it to fetch some chips, for example, and it can understand the request, move to a drawer, and pick out the right object.
Similar systems have inspired boundless optimism about the technology's potential, with some observers seeing in them a level of intelligence approaching that of humans. Yet, as critics and AI researchers argue, these systems merely pick up patterns and correlations repeated in their training data, without any real understanding of what they mean.
GPT-4 is the fourth "generative pre-trained transformer" since OpenAI's first release in 2018, built on the breakthrough neural-network architecture known as the transformer, developed in 2017. Such systems, "pre-trained" by analyzing huge quantities of text and images from the internet, have driven rapid advances in how AI systems analyze human speech and images.
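For readers curious what a "transformer" actually computes, the sketch below shows the scaled dot-product self-attention step at the architecture's core. It is a minimal, illustrative reconstruction in plain Python with NumPy, not OpenAI's code; the function name, matrix shapes, and toy data are all invented for the example.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Minimal scaled dot-product self-attention, the core transformer operation.

    x:             (seq_len, d_model) input token embeddings
    w_q, w_k, w_v: (d_model, d_head) projection matrices (learned during training)
    """
    q = x @ w_q  # queries: what each token is looking for
    k = x @ w_k  # keys: what each token offers
    v = x @ w_v  # values: the information each token carries
    # Similarity of every token to every other token, scaled for stability.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over positions turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of all tokens' values.
    return weights @ v

# Toy usage: 4 tokens with 8-dimensional embeddings and random projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

In a full transformer, many such attention layers, each with multiple "heads" and learned projection matrices, are stacked on top of one another; the "pre-training" the article describes is the process of learning those matrices from vast amounts of online text and images.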
Over the years, OpenAI has also fundamentally changed its stance on the potential social risks of releasing artificial intelligence tools to the masses. In 2019 the company declined to publicly release GPT-2, saying that while the AI performed very well, it worried about "malicious applications" of the technology. But last November, OpenAI publicly launched ChatGPT, a fine-tuned version based on GPT-3, and within days of launch it surpassed one million users.

Public experiments with ChatGPT and the Bing chatbot have shown that the technology is far from reliable without human intervention. After a series of strange conversations and flatly wrong answers, Microsoft executives acknowledged that AI chatbots still cannot be trusted to give correct answers, but said they were developing "confidence metrics" to address the issue.

GPT-4 promises to fix some of these shortcomings, and artificial intelligence advocates such as technology blogger Robert Scoble contend that "GPT-4 is better than anyone expected." OpenAI CEO Sam Altman, however, has tried to temper expectations: in January he said speculation about GPT-4's capabilities had reached impossible heights, that "the rumors about GPT-4 are ridiculous," and that "they will be disappointed." Yet Altman is also promoting OpenAI's vision. In a blog post last month, he said the company was planning how to ensure "all humanity" benefits from artificial general intelligence (AGI), the industry's term for the still-unrealized idea of a super artificial intelligence as smart as, or smarter than, humans.