
OpenAI takes the top two! Large-model code-generation leaderboard released: 7B LLaMA flops and gets beaten by 2.5B Codex

王林
Release: 2023-06-07 19:37:44

Recently, a tweet by Matthias Plappert ignited widespread discussion in the LLMs circle.


Plappert is a well-known computer scientist, and he published his HumanEval benchmark results for the mainstream LLMs in the AI community.

His testing focuses on code generation.

The results are both expected and surprising.


As expected, GPT-4 dominated the list and took first place.

Unexpectedly, OpenAI's text-davinci-003 came out of nowhere to take second place.

Plappert said text-davinci-003 is something of a hidden gem among models.

The familiar LLaMA is not good at code generation.

OpenAI dominates the list

Plappert said that the performance of GPT-4 is even better than the data in the literature.

In the paper, GPT-4's single-round (pass@1) result is a 67% pass rate, while Plappert's test reached 73%.
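(For reference, the single-round pass rates quoted throughout this article correspond to pass@1 on HumanEval. When more than one sample per problem is generated, results are usually reported with the unbiased pass@k estimator from the Codex/HumanEval paper. Below is a minimal sketch of that estimator, purely for context; it is not part of Plappert's harness.)

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n = samples generated per problem,
    c = samples that pass the unit tests, k = sample budget.
    With n = k = 1 this reduces to the plain pass rate quoted above."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```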


Analyzing the discrepancy, he said there are several possible explanations. One is that the prompt he gave GPT-4 was slightly better than the one the paper's authors used.

Another is his guess that the temperature was not set to 0 when the paper's authors evaluated GPT-4.

"Temperature" is a parameter used to adjust the creativity and diversity of text generated by the model. "Temperature" is a value greater than 0, usually between 0 and 1. It affects the probability distribution of sampled predicted words when the model generates text.

When the "temperature" of the model is higher (such as 0.8, 1 or higher), the model will be more inclined to choose from more diverse and different words, which makes the generated Texts are riskier and more creative, but may also produce more errors and incoherencies.

When the "temperature" is low (such as 0.2, 0.3, etc.), the model will mainly select from words with higher probability, thus producing smoother and more coherent text .

But at this point, the generated text may appear too conservative and repetitive.

Therefore, in actual applications, it is necessary to weigh and select the appropriate "temperature" value based on specific needs.
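Concretely, temperature just rescales the model's logits before sampling. A minimal numpy sketch of the idea (illustrative only, not tied to any particular API or to Plappert's setup):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float,
                      rng: np.random.Generator = np.random.default_rng()) -> int:
    """Temperature-scaled sampling: divide the logits by T before the softmax.
    T -> 0 approaches greedy decoding (argmax); larger T flattens the
    distribution and makes unlikely tokens more probable."""
    if temperature <= 0:
        return int(np.argmax(logits))      # treat T = 0 as greedy decoding
    scaled = logits / temperature
    scaled -= scaled.max()                 # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))
```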

Next, commenting on text-davinci-003, Plappert said that it too is a very capable OpenAI model.

Although it is not as good as GPT-4, its 62% pass rate in a single round of testing is enough to lock up second place.

Plappert emphasized that the best thing about text-davinci-003 is that it does not require ChatGPT's chat-style API, which keeps prompting simpler.
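For illustration, a plain completion call of this kind might look like the sketch below. It assumes the legacy openai Python client (pre-1.0, as was current in mid-2023) and is not Plappert's actual harness; the prompt is a HumanEval-style function stub that the model completes directly.

```python
import openai  # legacy client, openai < 1.0

openai.api_key = "sk-..."  # placeholder

# HumanEval-style prompt: a signature plus docstring, completed in place.
prompt = (
    "def has_close_elements(numbers, threshold):\n"
    '    """Return True if any two numbers are closer than threshold."""\n'
)

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,                    # deterministic, single-round style
    max_tokens=256,
    stop=["\ndef ", "\nclass "],      # stop before the next top-level block
)
print(prompt + resp["choices"][0]["text"])
```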


In addition, Plappert also gave Anthropic AI’s claude-instant model a relatively high evaluation.

He believes this model performs well and can beat GPT-3.5: GPT-3.5's pass rate is 46%, while claude-instant's is 54%.

Meanwhile, Anthropic AI's other LLM, claude, could not beat claude-instant; its pass rate is only 51%.

Plappert said the prompts used to test the two models were identical; if it loses, it loses.


Besides these familiar models, Plappert also tested a number of small open-source models.

The upside, he said, is that these models can be run locally.

In terms of scale, however, they are obviously much smaller than OpenAI's and Anthropic AI's models, so a head-to-head comparison is a bit unfair.


LLaMA code generation? A flop

LLaMA's test results, of course, did not impress Plappert.

Judging from the results, LLaMA performs very poorly at generating code, probably because GitHub was under-sampled when its training data was collected.


Even compared with Codex 2.5B, LLaMA is not in the same league (pass rate: 10% vs. 22%).


Finally, he tested Replit's 3B model.

He said its performance was decent, but it fell short of the numbers promoted on Twitter (a 16% vs. 22% pass rate).

Plappert believes this may be because the quantization he applied when running the model cost a few percentage points of pass rate.
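As an illustration of the kind of setup involved, loading a ~3B open model with 8-bit quantization via Hugging Face transformers and bitsandbytes typically looks like the sketch below. The model id and the exact quantization settings Plappert used are assumptions; the point is only that quantization trades a little accuracy for memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "replit/replit-code-v1-3b"  # assumed Hub id for Replit's 3B code model

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
    load_in_8bit=True,   # 8-bit weights save memory, but can shave a few
)                        # percentage points off the pass rate

prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy
print(tokenizer.decode(out[0], skip_special_tokens=True))
```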


At the end of the review, Plappert mentioned a very interesting point.

A user on Twitter found that GPT-3.5-turbo performs better through the Azure platform's Completion API than through the Chat API.

Plappert thinks this observation is plausible, because feeding prompts in through the Chat API can be quite involved.
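The friction is easy to see: with the Completion API the code stub itself is the prompt (as in the earlier sketch), whereas the Chat API requires wrapping it in role-tagged messages and then scraping the code back out of a conversational reply. A minimal sketch, again assuming the legacy openai client; the message wording is illustrative:

```python
import openai  # legacy client, openai < 1.0

openai.api_key = "sk-..."  # placeholder

prompt = 'def add(a, b):\n    """Return the sum of a and b."""\n'

# Chat API: the same stub is wrapped in messages, and the reply often comes
# back as prose plus a fenced code block that an evaluation harness must
# strip before it can run the unit tests.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": "Complete the Python function. Reply with code only."},
        {"role": "user", "content": prompt},
    ],
)
print(chat["choices"][0]["message"]["content"])
```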

