
Microsoft deleted its GPT-4-class open-source model within hours of release. The reason? It forgot the toxicity test

WBOY
Release: 2024-04-23 17:22:11

Last week, Microsoft dropped an open-source model called WizardLM-2 that it billed as GPT-4 level.

Unexpectedly, the model was taken down just a few hours after it was posted.

Some netizens then discovered that WizardLM-2's model weights and announcement posts had all been deleted and removed from Microsoft's collections; apart from mentions of the site, no evidence remained that this had been an official Microsoft project.


The project homepage on GitHub now returns a 404.


Project address: https://wizardlm.github.io/

The model weights hosted on Hugging Face have also disappeared...


Confusion spread across the internet: why was WizardLM gone?


As it turned out, Microsoft pulled the model because the team had forgotten to run a required test on it.

The Microsoft team later apologized, explaining that a few months had passed since WizardLM's last release and they were not yet familiar with the new release process:

"We accidentally missed an item that is required in the model release process: toxicity testing."


Microsoft upgrades WizardLM to a second generation

In June last year, the first-generation WizardLM, fine-tuned from LLaMA, was released and drew wide attention from the open-source community.


Paper address: https://arxiv.org/pdf/2304.12244.pdf

Then came WizardCoder, a code-focused version based on Code Llama and fine-tuned with Evol-Instruct.

Test results showed that WizardCoder's pass@1 on HumanEval reached an astonishing 73.2%, surpassing the original GPT-4.
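For context, pass@1 is the fraction of HumanEval problems for which a single sampled completion passes the unit tests. Benchmarks commonly report it via the unbiased pass@k estimator, which at k = 1 reduces to the plain success rate. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples, drawn from n generations of which c are correct,
    passes the unit tests."""
    if n - c < k:
        return 1.0  # too few failures left to fill a k-sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# At k = 1 this reduces to the plain success rate c / n.
print(round(pass_at_k(n=10, c=3, k=1), 4))  # 0.3
```

A reported pass@1 of 73.2% thus means roughly 120 of HumanEval's 164 problems were solved on the first try.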


Fast-forward to April 15, when Microsoft developers officially announced the new generation of WizardLM, this time fine-tuned from Mixtral 8x22B.

It comes in three parameter sizes: 8x22B, 70B, and 7B.


Most notably, the new models took a clear lead in the MT-Bench benchmark.


Specifically, the largest model, WizardLM-2 8x22B, performs nearly on par with GPT-4 and Claude 3.

At the same parameter scale, the 70B version ranks first.

The 7B version is the fastest and achieves performance comparable to leading models with ten times as many parameters.


The secret behind WizardLM-2's outstanding performance lies in Evol-Instruct, a training methodology developed by Microsoft.

Evol-Instruct uses large language models to iteratively rewrite an initial instruction set into increasingly complex variants. The evolved instructions are then used to fine-tune the base model, significantly improving its ability to handle complex tasks.
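The loop described above can be sketched as follows. Note the assumptions: `complete` is a hypothetical stand-in for any LLM text-generation call (here a deterministic stub so the control flow runs offline), and the evolution templates are illustrative, not the paper's exact prompts; the real pipeline also filters out failed evolutions.

```python
import random

def complete(prompt: str) -> str:
    """Stand-in for a real LLM call; it only tags the instruction so
    the loop can be demonstrated offline."""
    return "[evolved] " + prompt.rsplit("\n", 1)[-1]

# Illustrative "in-depth evolving" templates, not the paper's exact prompts.
EVOLVE_TEMPLATES = [
    "Rewrite this instruction to require multi-step reasoning:\n{inst}",
    "Add one rarely considered constraint to this instruction:\n{inst}",
    "Deepen this instruction by asking for edge cases:\n{inst}",
]

def evolve(seed_instructions, generations=3):
    """Simplified Evol-Instruct: each generation rewrites every
    instruction in the pool into a more complex variant via the LLM."""
    pool = list(seed_instructions)
    for _ in range(generations):
        pool = [complete(random.choice(EVOLVE_TEMPLATES).format(inst=i))
                for i in pool]
    return pool

print(evolve(["Sort a list of numbers."], generations=2))
# ['[evolved] [evolved] Sort a list of numbers.']
```

In the real system, the evolved pool (not the stub's tagged strings) becomes the supervised fine-tuning corpus.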

The reinforcement learning framework RLEIF also played an important role in WizardLM-2's development.

WizardLM-2's training additionally used the AI Align AI (AAA) method, in which multiple leading large models guide and improve one another.

The AAA framework consists of two main components: "co-teaching" and "self-teaching".

In the co-teaching phase, WizardLM engages a variety of licensed open-source and proprietary state-of-the-art models in simulated chats, quality critiques, improvement suggestions, and closing skill gaps.


By communicating with each other and providing feedback, models learn from their peers and improve their capabilities.

In self-teaching, WizardLM generates new evolved training data for supervised learning and preference data for reinforcement learning through active self-study.

This self-learning mechanism lets the model keep improving by learning from the data and feedback it generates itself.
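One self-teaching iteration might look like the sketch below, under the assumption (not spelled out in the announcement) that the model both samples candidate answers and ranks them itself; `ToyModel` is a deterministic stand-in so the loop runs offline, and `samples=4` is an arbitrary choice.

```python
class ToyModel:
    """Deterministic stand-in for an LLM with generate/score methods."""
    def __init__(self):
        self.calls = 0

    def generate(self, prompt: str) -> str:
        self.calls += 1
        return f"{prompt}-answer{self.calls}"

    def score(self, answer: str) -> int:
        # Pretends later samples are better; a real system would use a
        # reward model or the LLM's own judgment here.
        return int(answer.rsplit("answer", 1)[-1])

def self_teaching_step(model, prompts, samples=4):
    """One simplified self-teaching iteration: sample candidates, rank
    them with the model's own scoring, keep the best answer as SFT data
    and the best/worst pair as preference data for RL."""
    sft_data, pref_data = [], []
    for p in prompts:
        candidates = [model.generate(p) for _ in range(samples)]
        ranked = sorted(candidates, key=model.score, reverse=True)
        sft_data.append((p, ranked[0]))               # supervised pair
        pref_data.append((p, ranked[0], ranked[-1]))  # (prompt, chosen, rejected)
    return sft_data, pref_data

sft, pref = self_teaching_step(ToyModel(), ["Explain recursion"])
print(sft)   # [('Explain recursion', 'Explain recursion-answer4')]
print(pref)  # [('Explain recursion', 'Explain recursion-answer4', 'Explain recursion-answer1')]
```

The preference triples are the natural input for preference-based fine-tuning methods such as DPO or the PPO-style training RLEIF implies.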

In addition, WizardLM-2 was trained on synthetic data generated this way.

In the researchers' view, training data for large models is increasingly depleted; they believe that data carefully created by AI, and models progressively supervised by AI, are the only path to more powerful artificial intelligence.

They therefore built a fully AI-driven synthetic training system to improve WizardLM-2.


Quick-fingered netizens had already downloaded the weights

Before the repository was deleted, however, many people had already downloaded the model weights.

Several users also ran the model on additional benchmarks before it was removed.


The netizens who tested it were impressed by the 7B model, saying it would be their first choice for local assistant tasks.


One user even ran a toxicity test and found that WizardLM-2 8x22B scored 98.33, while the base Mixtral 8x22B scored 89.46 and Mixtral 8x7B-Instruct scored 92.93.

Higher is better here, which means WizardLM-2 8x22B is still very strong.


Still, without the toxicity test, there was no way the model could be shipped.

It is well known that large models are prone to hallucinations.

If WizardLM-2 output "toxic, biased, or incorrect" content in its answers, that would be bad news for the model.

Worse, if such mistakes drew the attention of the whole internet, Microsoft itself would face criticism and possibly even regulatory scrutiny.

Some netizens wondered: the metrics could simply be updated after the toxicity test, so why delete the entire repository and the weights?

The Microsoft author replied that under the latest internal rules, this was the only option.


Others said they want models without a "lobotomy".


For now, developers will have to wait patiently; the Microsoft team promises the model will go back online once testing is complete.

Source: 51cto.com