
Is GPT-5 going to be stopped? OpenAI issued a response in the early morning: To ensure the safety of AI, we do not 'cut corners'

Apr 07, 2023, 02:48 PM

The past few days can only be described as a troubled time for OpenAI.

Because of the safety issues that ChatGPT and GPT-4 may cause, OpenAI has faced criticism and obstruction from the outside world:

  • Musk and thousands of others jointly called for "all AI laboratories to immediately pause the training of models more powerful than GPT-4 for at least 6 months";
  • Italy banned ChatGPT, and OpenAI "must, within 20 days and through its representative in Europe, communicate the measures the company has taken to implement this requirement";
  • ChatGPT banned a large number of accounts;
  • Sales of ChatGPT Plus were suspended;
  • ......

These events show that although AI has proven capable of bringing many benefits to human society, technology is always a double-edged sword that can also pose real risks, and AI is no exception.

On April 6, OpenAI officially released a blog article titled "Our approach to AI safety", which discussed how to "safely build, deploy and use artificial intelligence systems."


OpenAI is committed to keeping strong artificial intelligence safe and broadly beneficial. Our AI tools provide many benefits to people today.

Users from around the world tell us that ChatGPT helps increase their productivity, enhance their creativity, and provide a tailored learning experience.

We also recognize that, like any technology, these tools come with real risks - so we work hard to ensure safety is built into our systems at every level.

1. Build increasingly safe artificial intelligence systems

Before releasing any new system, we conduct rigorous testing, engage external experts for feedback, work to improve model behavior with techniques such as reinforcement learning from human feedback, and build extensive safety and monitoring systems.
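As a rough, hypothetical illustration of one ingredient of reinforcement learning from human feedback: a reward model is commonly trained on human preference comparisons using a pairwise (Bradley-Terry) loss, and that reward model is then used to steer the language model. The PyTorch sketch below shows only that loss on toy numbers; the function name and values are illustrative assumptions, not OpenAI's actual implementation.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry) loss for training a reward model on human
    preference data: the human-preferred response should score higher."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scalar reward scores for a batch of three preference pairs.
chosen = torch.tensor([1.2, 0.7, 2.0])    # rewards for preferred responses
rejected = torch.tensor([0.3, 0.9, 1.1])  # rewards for rejected responses
print(preference_loss(chosen, rejected))  # loss shrinks as chosen outscores rejected
```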

For example, after our latest model, GPT-4, finished training, we spent more than 6 months across the organization making it safer and more aligned before releasing it publicly.

We believe that powerful artificial intelligence systems should undergo rigorous safety evaluations. Regulation is needed to ensure such practices are adopted, and we are actively engaging with governments to explore the best form this regulation might take.

2. Learn from real-world use to improve safeguards

We strive to prevent foreseeable risks before deployment, however, what we can learn in the laboratory is limited. Despite extensive research and testing, we cannot predict all of the beneficial ways people use our technology, or all the ways people misuse it. That’s why we believe that learning from real-world use is a key component to creating and releasing increasingly secure AI systems over time.

We carefully release new AI systems incrementally, with plenty of safeguards in place, pushing them out to a steadily expanding population, and continually improving based on what we learn.

We provide our most capable models through our own services and APIs so developers can use this technology directly in their applications. This allows us to monitor and take action on abuse and continually build mitigations for the real ways people abuse our systems, not just theories about what abuse might look like.

Real-world use has also led us to develop increasingly nuanced policies to prevent behaviors that pose real risks to people, while also allowing for many beneficial uses of our technology.

Crucially, we believe society must be given time to update and adjust to increasingly capable AI, and that everyone affected by this technology should have a significant say in how AI develops further. Iterative deployment helps us bring various stakeholders into the conversation about adopting AI technologies more effectively than if they had not experienced these tools first-hand.

3. Protecting Children

A key aspect of safety is protecting children. We require that people using our AI tools be 18 or older, or 13 or older with parental approval, and we are working on verification options.

We do not allow our technology to be used to generate hateful, harassing, violent or adult content, among other (harmful) categories. Our latest model, GPT-4, is 82% less likely to respond to requests for disallowed content than GPT-3.5, and we have built a robust system to monitor abuse. GPT-4 is now available to ChatGPT Plus users, and we hope to make it available to more people over time.
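As a side note for developers who want a similar screening step in their own applications, OpenAI also exposes a public moderation endpoint. The sketch below is a minimal illustration assuming the documented /v1/moderations HTTP API and an OPENAI_API_KEY environment variable; it is not the internal abuse-monitoring system described above.

```python
import os
import requests

def moderate(text: str) -> dict:
    """Ask OpenAI's moderation endpoint whether a piece of text is flagged."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]
    return {"flagged": result["flagged"], "categories": result["categories"]}

if __name__ == "__main__":
    print(moderate("Example user message to screen before sending it to the model."))
```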

We put a lot of effort into minimizing the likelihood that our models will produce content that is harmful to children. For example, when a user attempts to upload child sexual abuse material to our image tools, we block the action and report it to the National Center for Missing and Exploited Children.

In addition to our default safety guardrails, we work with developers such as the nonprofit Khan Academy, which built an AI-powered assistant that serves as both a virtual tutor for students and a classroom assistant for teachers, to tailor safety mitigations to their use cases. We are also developing features that will allow developers to set stricter standards for model outputs, to better support developers and users who want this functionality.

4. Respect Privacy

Our large language models are trained on a broad corpus of text that includes publicly available content, licensed content, and content generated by human reviewers. We don't use data to sell our services, to advertise, or to build profiles of people; we use data to make our models more helpful to people. ChatGPT, for example, improves through further training on the conversations people have with it.

While some of our training data includes personal information from the public internet, we want our models to learn about the world, not about private individuals. Therefore, we work to remove personal information from training data sets where feasible, fine-tune our models to reject requests for private individuals' personal information, and respond to requests from individuals to have their personal information removed from our systems. These steps minimize the possibility that our models could produce content that includes private information.

5. Improve factual accuracy

Large language models predict and generate the next sequence of words based on patterns they have seen previously, including text input provided by the user. In some cases, the next most likely word may not be factually accurate.
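To make this concrete, here is a toy, entirely hypothetical illustration of greedy next-word prediction: the continuation that was seen most often is chosen, which is not necessarily the continuation that is factually correct.

```python
from collections import Counter, defaultdict

# Toy bigram "model" built from a tiny, made-up corpus. Real LLMs use neural
# networks over tokens, but the failure mode illustrated here is the same:
# the most probable continuation wins, not the most accurate one.
corpus = (
    "the capital of australia is sydney . "    # common misconception
    "the capital of australia is sydney . "
    "the capital of australia is canberra . "  # correct, but rarer in this corpus
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def greedy_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(greedy_next("is"))  # -> "sydney": the most likely word, yet factually wrong
```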

Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we are making progress, in part by leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a primary source of data.

We recognize that there is much more work to be done to further reduce the likelihood of hallucinations and educate the public about the current limitations of these artificial intelligence tools.

6. Ongoing Research and Engagement

We believe that a practical way to address AI safety concerns is to dedicate more time and resources to researching effective mitigations and alignment techniques, and to test them against real-world abuse.

Importantly, we believe that improving the safety and capabilities of AI should go hand in hand. To date, our best safety work has come from working with our most capable models, because they are better at following user instructions and are easier to guide or "coach."

We will be increasingly cautious as more capable models are created and deployed, and we will continue to strengthen security precautions as our AI systems further develop.

While we waited more than 6 months to deploy GPT-4 in order to better understand its capabilities, benefits, and risks, it can sometimes take even longer to improve an AI system's safety. Therefore, policymakers and AI providers will need to ensure that the development and deployment of AI is governed effectively on a global scale, so that no one "cuts corners" in order to get ahead. This is a difficult challenge that requires both technical and institutional innovation, but it is also a contribution we are eager to make.

Addressing safety issues also requires widespread debate, experimentation and engagement, including on the boundaries of AI system behavior. We have and will continue to promote collaboration and open dialogue among stakeholders to create a safe AI ecosystem.

