LeCun raves: a $600 alternative to GPT-3.5! Stanford's 7-billion-parameter 'Alpaca' goes viral, and LLaMA is on a tear!

By 王林 | Published: 2023-04-23 16:04:08

Wake up, and Stanford's large model Alpaca has gone viral.

Yes, Alpaca is a brand-new model fine-tuned from Meta's LLaMA 7B. It uses only 52K instruction-following examples, yet its performance is roughly on par with GPT-3.5.

The key is that the training cost is extremely low: less than $600 in total. The breakdown is as follows:

  • Fine-tuning on eight 80GB A100s for 3 hours: less than $100;

  • Generating the data with OpenAI's API: about $500.

Percy Liang, associate professor of computer science at Stanford University, said that the lack of transparency around, and of full access to, capable instruction-following models such as GPT-3.5 has limited academic research in this important field. With Alpaca (LLaMA 7B fine-tuned on text-davinci-003 data), we have taken a small step forward.

Seeing someone build new results on top of his company's large model, Yann LeCun retweeted it enthusiastically (the publicity had to be done right).

## Fine-tune 7 billion parameters in 3 hours, for less than $600

Without further ado, let’s try it out first.

Q: Tell me a little about alpacas.

Answer: Alpacas are a species of South American camelid, related to llamas and camels. They are often raised for their fiber, which is used to make luxury goods such as sweaters and blankets. Alpacas are also popular pets because of their gentle and friendly nature.

Experience link: https://crfm.stanford.edu/alpaca/

For the Stanford team, training a high-quality instruction-following model within budget meant facing two important challenges: a strong pre-trained language model, and high-quality instruction-following data.

The LLaMA model, made available to academic researchers, solved the first problem.

For the second challenge, the paper "Self-Instruct: Aligning Language Model with Self Generated Instructions" provided a good starting point: use an existing strong language model to automatically generate instruction data.

However, the biggest weakness of the LLaMA model is its lack of instruction fine-tuning; one of OpenAI's biggest innovations was applying instruction tuning to GPT-3.

To address this, Stanford used an existing large language model to automatically generate instruction-following demonstrations.

They started with the 175 manually written instruction-output pairs in the self-instruct seed set, then prompted text-davinci-003, using the seed tasks as in-context examples, to generate more instructions.

The self-instruct method was improved by simplifying the generation pipeline, which greatly reduced the cost. The data-generation process produced 52K unique instructions with corresponding outputs, costing less than $500 through the OpenAI API.
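
To make the pipeline concrete, here is a minimal sketch of self-instruct-style data generation. It assumes the legacy OpenAI completions API (openai < 1.0) and a hypothetical seed_tasks.jsonl file of instruction-output pairs; it illustrates the idea and is not the Stanford team's actual script.

```python
# Minimal self-instruct-style sketch: show a few seed tasks in context and ask
# text-davinci-003 to produce new instruction-output pairs.
import json
import random

import openai  # legacy openai<1.0 client; assumes OPENAI_API_KEY is set

# Hypothetical seed file: one {"instruction": ..., "output": ...} object per line.
with open("seed_tasks.jsonl") as f:
    seed_tasks = [json.loads(line) for line in f]

def build_prompt(examples, n_new=5):
    """Build a few-shot prompt that asks for n_new new, diverse tasks."""
    prompt = (f"Below are examples of instructions and their outputs. "
              f"Write {n_new} new, diverse tasks in the same format.\n\n")
    for i, ex in enumerate(examples, 1):
        prompt += f"Instruction {i}: {ex['instruction']}\nOutput {i}: {ex['output']}\n\n"
    return prompt

in_context = random.sample(seed_tasks, k=min(3, len(seed_tasks)))
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=build_prompt(in_context),
    max_tokens=1024,
    temperature=0.7,
)
# The raw completion still has to be parsed and deduplicated before training.
print(response["choices"][0]["text"])
```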

With this dataset in hand, the researchers fine-tuned the LLaMA model using Hugging Face's training framework, taking advantage of techniques such as fully sharded data parallelism (FSDP) and mixed-precision training.

Fine-tuning a 7B LLaMA model took about 3 hours on eight 80GB A100s, which costs less than $100 at most cloud providers.
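
The sketch below shows what such a fine-tuning script might look like with the Hugging Face Trainer. The model path, data file, and hyperparameters are placeholders rather than the exact Stanford recipe; it assumes locally converted LLaMA weights and a JSON file of instruction-output pairs.

```python
# Hedged sketch: instruction fine-tuning with the Hugging Face Trainer, FSDP, and bf16.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_path = "path/to/llama-7b"   # placeholder: locally converted LLaMA weights
data_path = "alpaca_data.json"    # placeholder: 52K instruction-output pairs

tokenizer = AutoTokenizer.from_pretrained(model_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_path)

def tokenize(example):
    # Simplest possible formatting: instruction followed by the expected output.
    text = example["instruction"] + "\n" + example["output"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("json", data_files=data_path)["train"]
dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="alpaca-7b",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=2e-5,
    bf16=True,                               # mixed-precision training
    fsdp="full_shard auto_wrap",             # fully sharded data parallelism
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",
)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # labels = input_ids
Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()
```

On a multi-GPU node this would be launched with torchrun --nproc_per_node=8 so that FSDP can shard the model and optimizer states across the eight A100s.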

## Roughly on par with GPT-3.5

To evaluate Alpaca, the Stanford researchers carried out a human evaluation on inputs from the self-instruct evaluation set (conducted by the five student authors).

This evaluation set was collected by the self-instruct authors and covers a diverse range of user-facing instructions, including email writing, social media, and productivity tools.

They compared GPT-3.5 (text-davinci-003) against Alpaca 7B and found the two models perform very similarly: Alpaca won 90 of the comparisons versus 89 for GPT-3.5.
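
As an illustration of how such a pairwise evaluation is scored, the toy tally below counts wins per model; the judgements here are made up, whereas the real study used the self-instruct evaluation prompts and human judges.

```python
# Toy pairwise win-rate tally (hypothetical judgements, not the real study data).
judgements = [
    {"instruction": "Write a polite email declining a meeting.", "winner": "alpaca-7b"},
    {"instruction": "Summarize this article in one sentence.", "winner": "text-davinci-003"},
    {"instruction": "Suggest three names for a coffee shop.", "winner": "alpaca-7b"},
    # ... one entry per evaluation prompt
]

wins = {"alpaca-7b": 0, "text-davinci-003": 0}
for j in judgements:
    wins[j["winner"]] += 1

total = sum(wins.values())
for model, w in wins.items():
    print(f"{model}: {w}/{total} wins ({w / total:.0%})")
```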

Given the small size of the model and the modest amount of instruction data, this result is quite remarkable.

Beyond this static evaluation set, they also tested the Alpaca model interactively and found that, across a variety of inputs, Alpaca often behaves much like GPT-3.5.

Stanford's demos of Alpaca:

Demo 1: ask Alpaca to explain the difference between itself and LLaMA.

Demo 2: ask Alpaca to write an email; the content is concise and clear, and the format is standard.

As the examples above show, Alpaca's outputs are generally well written, and its answers tend to be shorter than ChatGPT's, reflecting the shorter-output style of GPT-3.5.

Of course, Alpaca also exhibits flaws common to language models.

For example, Alpaca says the capital of Tanzania is Dar es Salaam. In fact, Dodoma became Tanzania's new capital after 1974; Dar es Salaam is merely its largest city.

Alpaca can also spread misinformation while producing a seemingly well-reasoned summary.

In addition, Alpaca likely has many other limitations tied to the underlying language model and the instruction fine-tuning data. Still, Alpaca offers a relatively lightweight model that can serve as a basis for studying the important flaws of larger models.

For now, Stanford has released only Alpaca's training method and data, and plans to release the model weights in the future.

However, Alpaca may not be used commercially; it is restricted to academic research, for three specific reasons:

1. LLaMA is released under a non-commercial license, and Alpaca is derived from it;

2. The instruction data is based on OpenAI's text-davinci-003, whose terms of use prohibit developing models that compete with OpenAI;

3. Adequate safety measures have not yet been designed, so Alpaca is not ready for widespread deployment.

In addition, the Stanford researchers laid out three directions for future research on Alpaca:

  • Evaluation:

Starting from HELM (Holistic Evaluation of Language Models), capture more generative, instruction-following scenarios.

  • Safety:

Further study Alpaca's risks and improve its safety using methods such as automated red teaming, auditing, and adaptive testing.

  • Understanding:

Better understand how model capabilities emerge from the training recipe. What properties of the base model are needed? What happens as the model is scaled up? What properties of the instruction data are needed? What are the alternatives to generating data with self-instruct on GPT-3.5?

## The Stable Diffusion of large models

Netizens are now calling the Stanford "Alpaca" the "Stable Diffusion of large text models".

Meta's LLaMA model is free for researchers to use (after an application, of course), which is a great boon to the AI community.

Since the emergence of ChatGPT, many people have been frustrated by the built-in limitations of AI models. These restrictions prevent ChatGPT from discussing topics that OpenAI deems sensitive.

Therefore, the AI community has been hoping for an open-source large language model (LLM) that anyone can run locally, without censorship and without paying OpenAI API fees.

Open-source large models of this kind do exist, such as GPT-J, but the drawback is that they require a great deal of GPU memory and storage space.

Meanwhile, other open-source alternatives cannot reach GPT-3-level performance on off-the-shelf consumer hardware.

At the end of February, Meta released its latest language model, LLaMA, in 7-billion (7B), 13-billion (13B), 33-billion (33B), and 65-billion (65B) parameter sizes. Evaluation results show that the 13B version is comparable to GPT-3.

Paper address: https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/

Although Meta made the model available only to researchers who applied for access, netizens unexpectedly leaked the LLaMA weights on GitHub first.

Since then, development around the LLaMA language model has exploded.

Typically, running GPT-3 would require multiple data-center-grade A100 GPUs, and besides, the GPT-3 weights are not public.

Netizens started running the LLaMA model themselves, causing a sensation.

Using quantization techniques to shrink the model's memory footprint, LLaMA can now run on M1 Macs, smaller consumer Nvidia GPUs, Pixel 6 phones, and even a Raspberry Pi.
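
As a rough illustration of why quantization matters, the hedged sketch below loads a causal language model in 8-bit through Hugging Face transformers and bitsandbytes, roughly halving the memory footprint compared with fp16. The model path is a placeholder; llama.cpp takes a different route, converting the weights to its own 4-bit format for CPU inference.

```python
# Hedged sketch: 8-bit loading of a local LLaMA checkpoint with transformers + bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b"  # placeholder for locally converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",    # spread layers across the available GPU(s) and CPU
    load_in_8bit=True,    # int8 weights: roughly half the memory of fp16
)

prompt = "Tell me a little about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```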

Netizens have summarized what the community has built with LLaMA since its release:

February 24: LLaMA is released under a non-commercial license, available to researchers and entities working in government, civil society, and academia;

March 2: 4chan users leak all of the LLaMA models;

March 10: Georgi Gerganov creates the llama.cpp tool, allowing LLaMA to run on Macs with M1/M2 chips;

March 11: the 7B model runs on a 4GB Raspberry Pi via llama.cpp, though slowly, at about 10 seconds per token;

March 12: LLaMA 7B runs successfully on NPX, a node.js execution tool;

March 13: llama.cpp runs on the Pixel 6 phone;

And now, Stanford's "Alpaca" has been released.

## One More Thing

Not long after the project went live, the Alpaca demo became so popular that it stopped working...

Many netizens complained that nothing happened when they clicked "Generate", while others sat in a queue waiting to try it.
