Home > Technology peripherals > AI > body text

GPT-4 cannot create biological weapons! OpenAI's latest experiment proves that the lethality of large models is almost 0

WBOY
Release: 2024-02-02 10:12:30
forward
954 people have browsed it

Will GPT-4 accelerate the development of biological weapons? Before worrying about AI taking over the world, will humanity face new threats because it has opened Pandora's box?

After all, there are many cases where large models output all kinds of bad information.

Today, OpenAI, which is at the center of the storm and at the forefront of the wave, has once again responsibly generated a wave of popularity.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

We are developing LLMs, an early warning system to help deal with biological threats. Current models have shown some effectiveness in relation to abuse, but we will continue to develop our assessment blueprint to address future challenges.

After experiencing the turmoil on the board of directors, OpenAI began to learn from the pain, including the previously solemn release of the Preparedness Framework.

How much risk do large models pose in creating biological threats? The audience is afraid, and we at OpenAI don’t want to be subject to this.

Let's conduct scientific experiments and test them. If there are problems, we can solve them. If there are no problems, you can stop scolding me.

OpenAI later released the experimental results on the push page, indicating that GPT-4 has a slight increase in the risk of biological threats, but only one point:

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0 Picture

OpenAI stated that it will use this research as a starting point to continue working in this field, test the limits of the model and measure risks, and recruit people by the way.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Regarding the issue of AI security, the big guys often have their own opinions and output them online. But at the same time, gods from all walks of life are indeed constantly discovering ways to break through the safety restrictions of large models.

With the rapid development of AI for more than a year, the potential risks brought about in various aspects such as chemistry, biology, and information really worry us. Big bosses often use AI The crisis is on par with the nuclear threat.

The editor accidentally discovered the following thing when collecting information:

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

In 1947, scientists set the Doomsday Clock to draw attention to the apocalyptic threat of nuclear weapons.

But today, including climate change, biological threats such as epidemics, artificial intelligence, and the rapid spread of disinformation, the burden on this clock is even heavier.

Just a few days ago, this group of people reset the clock for this year - we have 90 seconds left before "midnight".

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Hinton issued a warning after leaving Google, and his apprentice Ilya is still fighting for resources for the future of mankind in OpenAI.

How lethal will AI be? Let’s take a look at OpenAI’s research and experiments.

Is GPT more dangerous than the Internet?

As OpenAI and other teams continue to develop more powerful AI systems, the pros and cons of AI are increasing significantly.

One negative impact that researchers and policymakers are particularly concerned about is whether AI systems will be used to assist in the creation of biological threats.

For example, malicious actors may use advanced models to formulate detailed operational steps to solve problems in laboratory operations, or directly automate certain tasks that generate biological threats in the cloud laboratory. some steps.

However, mere assumptions cannot explain any problems. Compared with the existing Internet, can GPT-4 significantly improve the ability of malicious actors to obtain relevant dangerous information?

Based on the previously released Preparedness Framework, OpenAI used a new evaluation method to determine how much help large models can provide to those trying to create biological threats.

OpenAI conducted a study on 100 participants, including 50 biology experts (with PhDs and professional laboratory work experience), and 50 college students (with at least one college biology course).

The experiment evaluates five key indicators for each participant: accuracy, completeness, innovativeness, time required and difficulty of self-assessment;

Simultaneously evaluate five stages in the biological threat creation process: conception, material acquisition, effect enhancement, formulation and release.

Design Principles

When we discuss the biosafety risks associated with artificial intelligence systems, there are two key factors that may affect biological Creation of threats: Information acquisition capabilities and innovativeness.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Researchers first focus on the ability to obtain known threat information, because the current AI system is best at It is to integrate and process existing language information.

Three design principles are followed here:

Design principle 1: To fully understand the mechanism of information acquisition, there must be human Directly involved.

#This is to simulate the process of malicious users using the model more realistically.

With human participation, language models can provide more accurate information, and people can customize query content, correct errors, and perform necessary follow-up operations as needed.

Design Principle 2: To conduct a comprehensive evaluation, the full capabilities of the model must be stimulated.

In order to ensure that the model's capabilities can be fully utilized, participants received training before the experiment - a free upgrade to "Prompt Word Engineer".

At the same time, in order to explore the capabilities of GPT-4 more effectively, a version of GPT-4 specially designed for research is also used here, which can directly answer questions involving biosecurity risks.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Design Guideline 3: When measuring AI risk, the degree of improvement relative to existing resources should be considered.

Although "jailbreaking" can be used to guide the model to spit out bad information, does the AI ​​model improve the convenience of this information that can also be obtained through the Internet?

So the experiment set up a control group to compare the output produced by using only the Internet (including online databases, articles, and search engines).

Research Method

Of the 100 participants introduced earlier, half were randomly assigned to answer questions using only the Internet, while the other half had Along with Internet access, you can also access GPT-4.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Mission Introduction

Gryphon Scientific’s biosafety experts designed Five research tasks cover five key stages in the creation of biological threats.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

In order to reduce the risks that may arise from knowledge dissemination (leakage of some sensitive information), experiments ensure that each task All focus on different operating procedures and biomaterials.

In order to ensure that the improvement of participants’ ability to use models and collect information is fairly considered during the evaluation process, random allocation is adopted here.

Evaluation Methodology

Evaluate participants’ performance through five key metrics to determine whether GPT-4 helps them in their tasks Perform better in:

- Accuracy (1-10 points): Used to evaluate whether participants have covered all key steps required to complete the task. A score of 10 represents complete successful completion of the task.

- Completeness (1-10 points): Check that the participant has provided all necessary information required to perform key steps, 10 points means all necessary details are included.

- Innovation (1-10 points): Assess whether participants are able to come up with novel solutions to the task, including those not foreseen by accuracy and completeness standards, 10 points Indicates the highest level of innovation.

- Time required to complete task: This data is obtained directly from the participant’s activity log.

- Self-assessed difficulty (1-10 points): Participants directly rated the difficulty of each task, with 10 points indicating that the task was extremely difficult.

Ratings for accuracy, completeness, and novelty are based on expert evaluations of participant responses. To ensure consistent scoring, Gryphon Scientific designed objective scoring criteria based on best performance on the task.

The scoring is first completed by an external biorisk expert, then reviewed by a second expert, and finally triple-confirmed by the model's automated scoring system.

The scoring process is anonymous, and scoring experts do not know whether the answer was provided by the model or obtained through search.

In addition to these five key metrics, background information on participants was collected, external website searches they conducted were recorded, and language model queries were saved for subsequent analysis.

Overview of results

Has accuracy improved?

As shown in the chart below, accuracy scores improved in almost all tasks for both students and experts - average accuracy improvement for students By 0.25 points, experts improved by 0.88 points.

However, this did not reach a statistically significant difference.

It is worth mentioning that in the amplification and recipe tasks, after using the language model, the students' performance has reached the benchmark level of experts.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Note: Experts use the GPT-4 research-specific version, which is different from the version we usually use

Although no statistical significance was found using Barnard's exact test, if 8 points are regarded as a standard, in all question tests, more than 8 points The number of people has increased.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Is the integrity improved?

# During testing, answers submitted by participants who used the model were generally more detailed and covered more relevant details.

Specifically, students using GPT-4 improved on average by 0.41 points in completeness, while experts who accessed the research-only GPT-4 improved by 0.82 points.

However, language models tend to generate longer content that contains more relevant information, and ordinary people may not record every detail when searching for information.

Further research is therefore needed to determine whether this truly reflects an increase in information completeness or simply an increase in the amount of information recorded.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Has innovation improved?

#The study did not find that models can help access previously inaccessible information or integrate information in new ways.

Among them, innovation scores were generally low, possibly because participants tended to use common techniques they already knew were effective, and there was no need to explore new ways to complete tasks.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Has the answering time been shortened?

There is no way to prove it.

Regardless of participants’ background, the average time to complete each task ranged from 20 to 30 minutes.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Has the difficulty of obtaining information changed?

The results showed that there was no significant difference in self-assessment difficulty between the two groups, nor did it show a specific trend.

After in-depth analysis of participants’ inquiry records, it was found that finding information containing step-by-step protocols or problem-solving information for some high-risk epidemic factors was not as difficult as expected.

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Pictures

Discussion

Although no statistical significance was found nature, but OpenAI believes that experts’ ability to obtain information about biological threats, especially in terms of the accuracy and completeness of the information, may be improved by accessing GPT-4, which is designed for research.

However, OpenAI has reservations about this and hopes to accumulate and develop more knowledge in the future to better analyze and understand the evaluation results.

Taking into account the rapid progress of AI, future systems are likely to bring more ability blessings to people with malicious intentions.

Therefore, build a comprehensive high-quality assessment system for biological risks (and other catastrophic risks), promote the definition of "meaningful" risks, and develop effective risk mitigation strategies , becomes crucial.

And netizens also said that you have to define it well first:

How to distinguish between "major breakthroughs in biology" and "biochemistry" What about "Threat"?

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0##Picture

"However, it is entirely possible for someone with bad intentions to obtain a large open source model that has not been securely processed, and Use offline.」

GPT-4 cannot create biological weapons! OpenAIs latest experiment proves that the lethality of large models is almost 0Picture

Reference:

https://www.php.cn/link/8b77b4b5156dc11dec152c6c71481565

The above is the detailed content of GPT-4 cannot create biological weapons! OpenAI's latest experiment proves that the lethality of large models is almost 0. For more information, please follow other related articles on the PHP Chinese website!

Related labels:
source:51cto.com
Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn
Popular Tutorials
More>
Latest Downloads
More>
Web Effects
Website Source Code
Website Materials
Front End Template
About us Disclaimer Sitemap
php.cn:Public welfare online PHP training,Help PHP learners grow quickly!