Table of Contents

Three lines of code using PandaLM
Features of PandaLM
Summary

PandaLM, an open-source 'referee large model' from Peking University, Westlake University and others: three lines of code to fully automatically evaluate LLMs, with an accuracy of 94% of ChatGPT

May 19, 2023 am 11:55 AM

After the release of ChatGPT, the ecosystem in the field of natural language processing has completely changed. Many problems that could not be solved before can be solved using ChatGPT.

However, it also brings a problem: large models have become so strong that the differences between them are difficult to judge with the naked eye.

For example, several versions of a model trained with different base models or hyperparameters may look similarly good on sampled outputs, making the performance gap between two models impossible to fully quantify by inspection.

Currently, there are two main options for evaluating large language models:

1. Call OpenAI's API for evaluation.

ChatGPT can be used to judge the quality of two models' outputs, but ChatGPT itself is iteratively upgraded: its responses to the same question may differ at different times, so its evaluation results are not reproducible.

2. Manual annotation

Hiring annotators on a crowdsourcing platform may be unaffordable for teams with insufficient funds, and there have also been cases of third-party companies leaking data.

To solve these "large model evaluation problems", researchers from Peking University, Westlake University, North Carolina State University, Carnegie Mellon University, and MSRA jointly developed PandaLM, a new language model evaluation framework committed to a privacy-preserving, reliable, reproducible, and cheap large model evaluation solution.


Project link: https://github.com/WeOpenML/PandaLM

Given the same context, PandaLM can compare the responses of different LLMs and provide specific reasons for its judgment.

To demonstrate the tool's reliability and consistency, the researchers created a diverse human-annotated test dataset of approximately 1,000 samples, on which PandaLM-7B's accuracy reached 94% of ChatGPT's evaluation capability.

Three lines of code using PandaLM

When two different large models produce different responses to the same instruction and context, PandaLM is designed to compare the response quality of the two models and output the comparison result, the reason for the comparison, and a reference response.

There are three comparison results: response 1 is better, response 2 is better, and response 1 and response 2 have similar quality.

When comparing the performance of multiple large models, you only need to use PandaLM to compare them in pairs and then aggregate the pairwise results to rank the models or draw a partial-order diagram of their performance, which clearly and intuitively shows the performance differences between the models.
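The aggregation step can be sketched as follows. PandaLM itself only emits the per-comparison verdicts; the Copeland-style tallying below is one illustrative way to turn hypothetical (wins, losses, ties) counts into a ranking, not the project's own code, and the model names are placeholders.

```python
# Hypothetical pairwise results: for each model pair (A, B), the counts
# of (wins for A, wins for B, ties) over a shared evaluation set.
pairwise = {
    ("model-A", "model-B"): (72, 28, 11),
    ("model-A", "model-C"): (40, 60, 11),
    ("model-B", "model-C"): (30, 70, 11),
}

def rank_models(pairwise):
    """Copeland-style ranking: a model scores one point for each
    pairwise matchup it wins (more wins than losses)."""
    scores = {}
    for (a, b), (wins_a, wins_b, _ties) in pairwise.items():
        scores.setdefault(a, 0)
        scores.setdefault(b, 0)
        if wins_a > wins_b:
            scores[a] += 1
        elif wins_b > wins_a:
            scores[b] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(rank_models(pairwise))  # → ['model-C', 'model-A', 'model-B']
```

With ties in the Copeland score, a finer tie-breaker (e.g. head-to-head margin) would be needed; the partial-order diagram mentioned above sidesteps this by showing only the pairwise relations.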

PandaLM only needs to be deployed locally and requires no human participation, so its evaluation protects privacy and is quite cheap.

In order to provide better interpretability, PandaLM can also explain its selections in natural language and generate an additional set of reference responses.


In the project, the researchers not only support using PandaLM through a Web UI for case analysis, but, for ease of use, also support calling PandaLM with three lines of code to evaluate text generated by arbitrary models and data.

Considering that many existing models and frameworks are not open-source or are difficult to run inference with locally, PandaLM supports generating the text to be evaluated from specified model weights, or directly passing in a .json file containing the text to be evaluated.

Users can evaluate user-defined models and input data with PandaLM by simply passing in a list of model names/Hugging Face model IDs or a .json file path. The following is a minimalist usage example:

[Screenshot: minimalist usage example]
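As a sketch of that minimal call (the class and argument names follow the project README, but the exact API may differ across versions, so treat this as illustrative):

```python
def evaluate_with_pandalm(candidate_paths, input_data_path):
    """Three-line PandaLM evaluation call, wrapped in a function.
    `candidate_paths` is a list of model names/Hugging Face model IDs;
    `input_data_path` points to a .json file with the texts to evaluate."""
    from pandalm import EvaluationPipeline  # requires the PandaLM repo
    pipeline = EvaluationPipeline(candidate_paths=candidate_paths,
                                  input_data_path=input_data_path)
    return pipeline.evaluate()
```

For example, `evaluate_with_pandalm(["huggyllama/llama-7b", "bigscience/bloom-7b1"], "data.json")` would compare the two models' responses on the samples in `data.json` (the model IDs and file path here are illustrative).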

In order to allow everyone to use PandaLM flexibly for free evaluation, the researchers have also published the model weights of PandaLM on the Hugging Face Hub. You can load the PandaLM-7B model with the following command:

[Screenshot: command to load PandaLM-7B]
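A minimal sketch using the standard transformers API; the model id `WeOpenML/PandaLM-7B-v1` is taken from the project's Hugging Face page and is an assumption here, as is the use of the causal-LM auto class:

```python
def load_pandalm(model_id="WeOpenML/PandaLM-7B-v1"):
    """Load the PandaLM-7B judge model and its tokenizer from the
    Hugging Face Hub (downloads roughly 7B parameters of weights)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```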

Features of PandaLM

Reproducibility

Because the weights of PandaLM are public, even though the output of a language model is random, PandaLM's evaluation results can always remain consistent once the random seed is fixed.

Updates to models behind online APIs are not transparent, their outputs may differ considerably at different times, and old versions of a model may no longer be accessible, so evaluations based on online APIs are often not reproducible.

Automation, privacy protection and low overhead

Simply deploy the PandaLM model locally and call the ready-made commands to start evaluating various large models, with no need for the constant back-and-forth that hiring experts for annotation requires and no risk of data leakage. Since no API fees or labor costs are involved, it is also very cheap.

Evaluation Level

To prove the reliability of PandaLM, the researchers hired three experts to perform independent, repeated annotation and thereby created a manually annotated test set.

The test set covers 50 different scenarios, each containing several tasks, and is diverse, reliable, and consistent with human preferences for text. Each sample consists of an instruction and context plus two responses generated by different large models, whose relative quality was judged by humans.

Samples with large differences between annotators were screened out to ensure that each annotator's IAA (Inter-Annotator Agreement) on the final test set is close to 0.85. Notably, the training set of PandaLM has no overlap with this manually annotated test set.
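Inter-annotator agreement can be quantified in several ways; as an illustration only (the project may use a different agreement measure), Cohen's kappa between two annotators over the three labels {0, 1, 2} can be computed like this, with the two annotation lists below being hypothetical:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each annotator's label distribution."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(ca) | set(cb)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical annotators labeling 8 comparison samples
ann1 = [1, 1, 2, 0, 1, 2, 2, 1]
ann2 = [1, 1, 2, 0, 2, 2, 2, 1]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.795
```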


The filtered-out samples required additional knowledge or hard-to-obtain information to judge, which made it difficult for humans to label them accurately.

The filtered test set contains 1000 samples, while the original unfiltered test set contains 2500 samples. The distribution of the test set is {0:105, 1:422, 2:472}, where 0 indicates that the two responses are of similar quality, 1 indicates that response 1 is better, and 2 indicates that response 2 is better. Taking the human test set as the benchmark, the performance comparison of PandaLM and gpt-3.5-turbo is as follows:
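For context, these counts sum to 999 (consistent with the "approximately 1,000 samples" above) and imply a trivial majority-class baseline of about 47.2% accuracy (always predicting "response 2 is better"), which any useful judge must clearly beat; a quick check:

```python
# Label counts from the filtered test set: 0 = similar quality,
# 1 = response 1 better, 2 = response 2 better.
dist = {0: 105, 1: 422, 2: 472}

total = sum(dist.values())
majority_baseline = max(dist.values()) / total
print(total, round(majority_baseline, 3))  # → 999 0.472
```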

[Table: performance comparison of PandaLM-7B and gpt-3.5-turbo on the human-annotated test set]

It can be seen that PandaLM-7B reaches 94% of gpt-3.5-turbo's level in accuracy, and in terms of precision, recall, and F1 score, PandaLM-7B is almost on par with gpt-3.5-turbo.

Therefore, PandaLM-7B can be considered to already have large-model evaluation capability comparable to that of gpt-3.5-turbo.

In addition to the accuracy, precision, recall, and F1 score on the test set, the results of comparisons between 5 large open source models of similar size are also provided.

The researchers first fine-tuned the five models with the same training data, and then had humans, gpt-3.5-turbo, and PandaLM each compare the five models pairwise.

The first tuple (72, 28, 11) in the first row of the table below indicates that 72 LLaMA-7B responses were better than Bloom-7B's, 28 LLaMA-7B responses were worse than Bloom-7B's, and 11 pairs of responses were of similar quality.

[Table: pairwise comparison results of the five models under human judgment]

So in this example, humans judge LLaMA-7B to be better than Bloom-7B. The results in the three tables show that humans, gpt-3.5-turbo, and PandaLM-7B give completely consistent judgments of the relative strengths of the models.

[Tables: pairwise comparison results under gpt-3.5-turbo and PandaLM-7B judgment]

Summary

PandaLM provides a third solution for evaluating large models alongside human evaluation and OpenAI API evaluation: it not only evaluates at a high level, but also offers reproducible results, an automated evaluation process, privacy protection, and low overhead.

In the future, PandaLM will promote research on large models in academia and industry, so that more people can benefit from the development of large models.


