
New test benchmark released, and the most powerful open-source model, Llama 3, is embarrassed

PHPz
Release: 2024-04-23 12:13:10

If an exam is too easy, top students and weak students alike can score 90, and the gap between them never shows...

With the release of ever-stronger models such as Claude 3 and Llama 3, and even GPT-5 on the horizon, the industry urgently needs a harder, more discriminative benchmark.

LMSYS, the organization behind the large model arena, has launched its next-generation benchmark, Arena-Hard, which has attracted widespread attention.

It also provides a fresh reference point for the strength of the two instruction-tuned versions of Llama 3.


Compared with the earlier MT-Bench, where every model's score was bunched together, Arena-Hard raises discriminative power from 22.6% to 87.4%, making it much clearer which model is stronger and which is weaker.

Arena-Hard is built from live human data collected in the arena, and its agreement with human preferences is as high as 89.1%.
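For intuition, the discrimination (separability) figure can be read as the share of model pairs on the leaderboard that can be told apart with statistical confidence. Below is a rough, hypothetical sketch of one way such a metric could be computed from per-prompt win/loss records against a fixed baseline, using bootstrapped confidence intervals; the exact procedure LMSYS uses is described in their blog post and may differ in detail.

```python
# Toy sketch: separability = share of model pairs whose bootstrapped
# win-rate confidence intervals do not overlap.
import numpy as np


def bootstrap_ci(wins, n_boot=1000, alpha=0.05, seed=0):
    """95% bootstrap confidence interval for a model's win rate.
    `wins` is a 0/1 array: did the model beat the baseline on each prompt?"""
    rng = np.random.default_rng(seed)
    means = [rng.choice(wins, size=len(wins), replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, 100 * alpha / 2), np.percentile(means, 100 * (1 - alpha / 2))


def separability(per_model_wins: dict) -> float:
    """Fraction of model pairs whose confidence intervals do not overlap."""
    cis = {m: bootstrap_ci(np.asarray(w)) for m, w in per_model_wins.items()}
    models = list(cis)
    separable = total = 0
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            (lo1, hi1), (lo2, hi2) = cis[models[i]], cis[models[j]]
            total += 1
            separable += int(hi1 < lo2 or hi2 < lo1)  # no overlap -> distinguishable
    return separable / total if total else 0.0


# Made-up win/loss records against a fixed baseline, purely for illustration:
rng = np.random.default_rng(42)
records = {name: rng.binomial(1, p, 500)
           for name, p in [("model_a", 0.80), ("model_b", 0.55), ("model_c", 0.50)]}
print(f"separability: {separability(records):.1%}")
```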

Beyond reaching SOTA on both of these metrics, there is an additional benefit:

Because the test data is updated in real time, it contains fresh prompts that humans have only just come up with and that the AI has never seen during training, which mitigates potential data leakage.

After releasing a new model, instead of waiting a week or so for human users to vote, you can spend about $25 to run the test pipeline and get results quickly.
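At its core, that pipeline is an LLM-as-judge loop: the candidate model answers the benchmark prompts, and a strong judge model compares each answer against a fixed baseline answer. The snippet below is a minimal, hypothetical sketch of that idea using the OpenAI Python SDK; the model names, judge prompt, and verdict scale are illustrative assumptions, not the official Arena-Hard implementation (see the GitHub repo linked at the end for that).

```python
# Minimal LLM-as-judge loop: compare a candidate model against a baseline
# on each benchmark prompt, using a stronger model as the referee.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_SYSTEM = (
    "You are an impartial judge. Compare assistant A and assistant B on the "
    "user's question and reply with exactly one of: A>>B, A>B, A=B, B>A, B>>A."
)


def answer(model: str, prompt: str) -> str:
    """Get a model's answer to one benchmark prompt."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def judge(prompt: str, answer_a: str, answer_b: str,
          judge_model: str = "gpt-4-turbo") -> str:
    """Ask the judge model for a pairwise verdict on two answers."""
    resp = client.chat.completions.create(
        model=judge_model,
        temperature=0,
        messages=[
            {"role": "system", "content": JUDGE_SYSTEM},
            {"role": "user",
             "content": f"Question:\n{prompt}\n\n[A]\n{answer_a}\n\n[B]\n{answer_b}"},
        ],
    )
    return resp.choices[0].message.content.strip()


# Example: pit a candidate model against a baseline on one prompt.
prompt = "Implement binary search in Python and explain its complexity."
print(judge(prompt, answer("gpt-3.5-turbo", prompt), answer("gpt-4-turbo", prompt)))
```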

Some netizens commented that testing with real user prompts, rather than high-school-exam-style questions, really matters.


How does the new benchmark work?

To put it simply, 500 high-quality prompts are selected as the test set from 200,000 user queries collected in the large model arena.

The first requirement in the selection process is diversity: the test set should cover a wide range of real-world topics.

To ensure this, the team adopted a BERTopic-style topic modeling pipeline: each prompt is first converted into an embedding with OpenAI's text-embedding-3-small model, UMAP reduces the dimensionality, the hierarchical density-based clustering algorithm HDBSCAN identifies clusters, and finally GPT-4-Turbo summarizes each cluster.
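A minimal sketch of such a pipeline might look like the following; the package choices (openai, umap-learn, hdbscan) and parameter values are illustrative assumptions rather than the team's exact configuration.

```python
# Sketch of a BERTopic-style pipeline: embed -> UMAP -> HDBSCAN -> summarize.
# Requires: pip install openai umap-learn hdbscan
import hdbscan
import umap
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def embed(prompts: list[str]) -> list[list[float]]:
    """Embed prompts with OpenAI's text-embedding-3-small model."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=prompts)
    return [d.embedding for d in resp.data]


def cluster_topics(prompts: list[str]):
    """Return a cluster label per prompt (-1 means HDBSCAN treated it as noise)."""
    vectors = embed(prompts)
    # Reduce dimensionality so density-based clustering behaves well.
    reduced = umap.UMAP(n_neighbors=15, n_components=5, metric="cosine").fit_transform(vectors)
    return hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(reduced)


def summarize_cluster(cluster_prompts: list[str]) -> str:
    """Ask GPT-4-Turbo for a short topic label describing one cluster."""
    sample = "\n".join(cluster_prompts[:20])  # a handful of examples is enough
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user",
                   "content": f"Give a short topic name for these user prompts:\n{sample}"}],
    )
    return resp.choices[0].message.content.strip()
```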


At the same time, the selected prompts must be of high quality, measured along seven key criteria:

  • Specificity: does the prompt ask for a specific output?
  • Domain knowledge: does the prompt cover one or more specific domains?
  • Complexity: does the prompt involve multiple layers of reasoning, components, or variables?
  • Problem solving: does the prompt require the AI to actively demonstrate problem-solving?
  • Creativity: does solving the prompt involve some level of creativity?
  • Technical accuracy: does the prompt require a technically accurate response?
  • Real-world application: is the prompt relevant to practical, real-world use?


GPT-3.5-Turbo and GPT-4-Turbo annotate each prompt with a score from 0 to 7, counting how many of these criteria it meets. Each cluster is then scored by the average score of its prompts.
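As a rough sketch, this annotation step could be implemented by asking the judge model the seven yes/no questions and counting the yeses; the judge prompt and JSON parsing below are assumptions for illustration, not the official implementation.

```python
# Sketch of annotating each prompt 0-7 by counting satisfied quality criteria.
import json
import statistics

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERIA = ["specificity", "domain_knowledge", "complexity", "problem_solving",
            "creativity", "technical_accuracy", "real_world_application"]


def score_prompt(prompt: str, judge_model: str = "gpt-4-turbo") -> int:
    """Return how many of the seven quality criteria the prompt satisfies (0-7)."""
    instruction = (
        "For the user prompt below, reply with a JSON object mapping each of these "
        f"keys to true or false: {', '.join(CRITERIA)}.\n\nPrompt:\n{prompt}"
    )
    resp = client.chat.completions.create(
        model=judge_model,
        temperature=0,
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": instruction}],
    )
    flags = json.loads(resp.choices[0].message.content)
    return sum(bool(flags.get(k)) for k in CRITERIA)


def cluster_quality(cluster_prompts: list[str]) -> float:
    """Average criteria count across all prompts in one cluster."""
    return statistics.mean(score_prompt(p) for p in cluster_prompts)
```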

High-quality questions are often related to challenging topics or tasks, such as game development or mathematical proofs.


Is the new benchmark accurate?

Arena-Hard currently has one weakness: GPT-4, used as the judge, tends to favor its own output, and the team explicitly flags this.

It can be seen that the two latest versions of GPT-4 score far higher than Claude 3 Opus, whereas the gap in human voting scores is nowhere near as large.


In fact, recent research has demonstrated that frontier models do prefer their own output.


That research also found that models have an innate ability to tell whether a piece of text was written by themselves; fine-tuning can strengthen this self-recognition, and self-recognition is linearly correlated with self-preference.


So how do the results change if Claude 3 is used for scoring instead? LMSYS ran that experiment too.

First of all, the scores of the Claude series will indeed increase.


But surprisingly, it prefers several open models such as Mixtral and 01.AI's Yi, and even gives GPT-3.5 a significantly higher score.

Overall, judging with Claude 3 yields both worse discrimination and worse agreement with human results than judging with GPT-4.


Many netizens therefore suggest having several large models score jointly.
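A simple, purely illustrative way to do that is to collect a verdict from each judge model and take a majority vote; the sketch below assumes OpenAI-hosted judges (a Claude judge would need the anthropic SDK) and is not something the Arena-Hard pipeline actually does.

```python
# Purely illustrative ensemble judging: majority vote across several judge models.
from collections import Counter
from typing import Callable

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def openai_judge(model: str) -> Callable[[str, str, str], str]:
    """Build a judge function backed by an OpenAI chat model.
    (A Claude judge would be built the same way with the anthropic SDK.)"""
    def _judge(question: str, ans_a: str, ans_b: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Compare answers A and B. Reply with exactly one of: A, B, tie."},
                {"role": "user",
                 "content": f"Question:\n{question}\n\n[A]\n{ans_a}\n\n[B]\n{ans_b}"},
            ],
        )
        return resp.choices[0].message.content.strip()
    return _judge


def ensemble_verdict(question: str, ans_a: str, ans_b: str, judges) -> str:
    """Majority vote across judge functions; no clear majority counts as a tie."""
    votes = Counter(j(question, ans_a, ans_b) for j in judges)
    winner, count = votes.most_common(1)[0]
    return winner if count > len(judges) / 2 else "tie"


judges = [openai_judge(m) for m in ("gpt-4-turbo", "gpt-4", "gpt-3.5-turbo")]
```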


In addition, the team ran further ablation experiments to verify the effectiveness of the new benchmark.

For example, adding "make the answer as detailed as possible" to the prompt does increase average output length, and the score does improve.

But when the instruction is instead something like "be chatty", average output length also increases, yet the score gain is negligible.


In addition, there were many interesting discoveries during the experiment.

For example, GPT-4 grades strictly and deducts heavily for errors in an answer, while Claude 3 is lenient even when it notices small mistakes.

For coding questions, Claude 3 tends to give simply structured answers that avoid external libraries and help humans learn programming, while GPT-4-Turbo prefers the most practical answer, regardless of its educational value.

In addition, even if the temperature is set to 0, GPT-4-Turbo may produce slightly different judgments.

The first 64 clusters in the hierarchy visualization also show that the questions users ask in the large model arena are indeed high in both quality and diversity.


Perhaps some of your own prompts are among them.

Arena-Hard GitHub: https://github.com/lm-sys/arena-hard
Arena-Hard HuggingFace: https://huggingface.co/spaces/lmsys/arena-hard-browser
Large model arena: https://arena.lmsys.org

Reference link:

[1]https://x.com/lmsysorg/status/1782179997622649330
[2]https://lmsys.org/blog/2024-04-19-arena-hard/
