
Stanford's 7 billion parameter open source model is comparable to GPT-3.5 and can be reproduced for $100

PHPz
Release: 2023-04-13 16:04:03

As large-scale language models become increasingly powerful, people have put forward higher ethical requirements for AI models. The industry has the advantage of computing resources in terms of model scale expansion, but making the model more standardized and reliable requires the efforts of the academic community.

Recently, Stanford fine-tuned a new model Alpaca based on Meta's LLaMA 7B model. This study used OpenAI's text-davinci-003 model to generate 52K instruction-following samples in a self-instruct manner as training data for Alpaca. The research team has open sourced the training data, the code to generate the training data, and the hyperparameters, and will release the model weights and training code in the future.


  • Project address: https://github.com/tatsu-lab/stanford_alpaca
  • Trial address: https://alpaca-ai-custom6.ngrok.io/

Experimental results show that many behaviors of Alpaca are similar to text-davinci-003. In other words, the performance of Alpaca, a lightweight model with only 7B parameters, is comparable to very large-scale language models such as GPT-3.5.

Let’s take a look at how the Alpaca model achieves this.

Training Method

Training a high-quality instruction-following model on an academic budget faces two important challenges: a powerful pre-trained language model and high-quality instruction-following data.

Meta’s recently released LLaMA family of models addresses the first challenge. For the second challenge, the self-instruct paper from late 2022 proposed using existing powerful language models to automatically generate instruction data.


Paper address: https://arxiv.org/abs/2212.10560

Following this method, Alpaca fine-tunes the LLaMA 7B model with supervised learning on 52K instruction-following samples generated by text-davinci-003 in a self-instruct manner.


Overview of the self-instruct method.

Alpaca's research team started from the 175 manually written instruction-output pairs in the self-instruct seed set, then prompted text-davinci-003 with this seed set as in-context examples to generate more instructions. The study improves on the self-instruct method by simplifying the generation pipeline, which significantly reduces cost.
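The in-context prompting step can be sketched roughly as follows: seed instruction-output pairs are formatted into a few-shot prompt, and the model is asked to continue the list with new tasks. This is a minimal illustration, not the exact template or helper names used by the Alpaca team.

```python
# Hypothetical sketch of the self-instruct prompting step: seed
# instruction-output pairs become in-context examples, and the resulting
# prompt would be sent to text-davinci-003 to elicit a new instruction.
# The template and function names are illustrative, not the real ones.

def build_self_instruct_prompt(seed_tasks, num_examples=3):
    """Format seed tasks as a few-shot prompt asking for a new instruction."""
    header = "Come up with a diverse set of task instructions and outputs.\n\n"
    blocks = []
    for i, task in enumerate(seed_tasks[:num_examples], start=1):
        blocks.append(
            f"Task {i}:\nInstruction: {task['instruction']}\n"
            f"Output: {task['output']}\n"
        )
    # The model is asked to continue the numbered list with a new task.
    blocks.append(f"Task {num_examples + 1}:\nInstruction:")
    return header + "\n".join(blocks)

seed = [
    {"instruction": "Give three tips for staying healthy.",
     "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well."},
    {"instruction": "What is the capital of France?",
     "output": "The capital of France is Paris."},
]

prompt = build_self_instruct_prompt(seed, num_examples=2)
print(prompt)
```

The generated completions would then be filtered and added back to the task pool, which is the iterative loop the self-instruct paper describes.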


The study generated a total of 52K distinct instructions and corresponding outputs as training data, using the OpenAI API at a cost of less than $500. Since the research team has open sourced the training data, developers who want to reproduce Alpaca can save that $500.


With this instruction-following dataset in hand, the next step was to fine-tune the LLaMA model using Hugging Face’s training framework, leveraging techniques such as FSDP (Fully Sharded Data Parallel) and mixed-precision training. Cost-wise, fine-tuning a 7B LLaMA model took 3 hours on eight 80GB A100s, which costs less than $100 with most cloud providers.
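A training setup of this kind might look like the configuration fragment below, assuming the Hugging Face transformers `Trainer`. The flags for FSDP and mixed precision are the ones the Trainer exposes; the specific hyperparameter values shown are illustrative, not necessarily those in the Alpaca repository.

```python
# Minimal sketch of a fine-tuning configuration with FSDP and mixed
# precision via Hugging Face transformers; values are illustrative.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="alpaca-7b",
    num_train_epochs=3,
    per_device_train_batch_size=4,   # per GPU, across eight A100s
    learning_rate=2e-5,
    bf16=True,                       # mixed-precision training
    fsdp="full_shard auto_wrap",     # Fully Sharded Data Parallel
)
```

With FSDP, each GPU holds only a shard of the model parameters, optimizer state, and gradients, which is what makes fine-tuning a 7B model fit on this hardware in a few hours.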

Model Evaluation

The model was evaluated manually on inputs from the self-instruct evaluation set, a task completed by 5 students on the research team. The evaluation set was collected by the authors of the self-instruct paper and covers a variety of user-oriented instructions involving email, social media, and office tools.

After a blind pairwise comparison of text-davinci-003 and Alpaca 7B, the researchers found that the two models performed very similarly, with Alpaca slightly better than text-davinci-003.
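Scoring a blind pairwise comparison like this reduces to tallying which model's answer each judge preferred. The sketch below shows the idea with made-up judgment data; the function and labels are hypothetical, not from the study.

```python
# Hypothetical scoring of a blind pairwise comparison: each judgment
# records which model's answer was preferred (or a tie), and we compute
# the fraction of judgments won by each side. Data here is made up.
from collections import Counter

def win_rates(preferences):
    """preferences: list of 'alpaca', 'davinci', or 'tie' judgments."""
    counts = Counter(preferences)
    total = len(preferences)
    return {label: counts[label] / total
            for label in ("alpaca", "davinci", "tie")}

judgments = ["alpaca", "davinci", "alpaca", "tie", "alpaca", "davinci"]
rates = win_rates(judgments)
print(rates)  # alpaca preferred in 3 of 6 judgments
```

"Blind" simply means the judges see the two answers without knowing which model produced them, so the tally cannot be biased by model identity.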

In terms of parameter count, Alpaca is far smaller than text-davinci-003, and a lightweight 7B language model can even run on mobile devices. This is what makes Alpaca significant.

In addition to utilizing the static self-instruct evaluation set mentioned above, this study also conducted interactive testing on the Alpaca model and found that Alpaca generally performed similarly to text-davinci-003.

The following are two examples tested by the research team, and the results show that Alpaca's output is good and reflects the general style of the instruction following data set. For example, Alpaca often outputs more concise answers than ChatGPT, similar to text-davinci-003.

Model Defects

In the experiment, Alpaca also showed several common defects of language models, including hallucinations, toxicity and stereotypes, among which the hallucination problem is particularly serious.

For example, in the picture below, Alpaca answered that the capital of Tanzania is Dar es Salaam, but it should actually be Dodoma.


Additionally, Alpaca can generate text that appears well-written but contains errors or false information, which may mislead people.


Alpaca may contain a number of other flaws related to the underlying language model and instruction tuning data. However, Alpaca remains important to the machine learning community because it provides a relatively lightweight model that can serve as a basis for studying important flaws. The Stanford research team also emphasized that Alpaca can only be used for academic research and any commercial use is prohibited.

Next, the Stanford research team will further explore the security, understanding ability, scale expansion, etc. of the Alpaca model. The research team hopes Alpaca will facilitate the development of instruction-following models.

Source: 51cto.com