
Are the benchmarks for scoring large models reliable? Anthropic comes for the next big evaluation


In the current era of large language models (LLMs), evaluating AI systems has become essential work. What difficulties arise during evaluation? A recent article from Anthropic offers some answers.

At this stage, most discussion of the impact of artificial intelligence (AI) on society comes down to certain attributes of AI systems, such as truthfulness, fairness, and potential for abuse. But many researchers do not fully appreciate how difficult it is to build robust and reliable model evaluations, and many of today's evaluation suites are limited in various ways.

AI startup Anthropic recently posted an article, "Challenges in Evaluating AI Systems," on its official website. In it, the company writes that it has spent a long time building evaluations of AI systems in order to understand them better.


Article address: https://www.anthropic.com/index/evaluating-ai-systems

The article mainly discusses the following aspects:

  • Multiple choice evaluations;

  • Use third-party evaluation frameworks such as BIG-bench and HELM;

  • Have crowdworkers measure whether models are helpful or harmful;

  • Have domain experts red-team the model for relevant threats;

  • Use generative AI to develop assessment methods;

  • Work with nonprofits to review models for harm.

Challenges of Multiple Choice Assessment

Multiple-choice assessment may seem simple, but it is not. The article discusses the challenges models face on the MMLU (Massive Multitask Language Understanding) and BBQ (Bias Benchmark for QA) benchmarks.

The MMLU dataset

MMLU is an English-language evaluation dataset containing 57 multiple-choice question-answering tasks covering mathematics, history, law, and more, and it is currently the mainstream dataset for LLM evaluation: the higher the accuracy, the stronger the model. However, the article identifies four challenges in using MMLU:

1. Because MMLU is so widely used, it is almost inevitable that models ingest MMLU data during training. It is like a student seeing the exam questions before the test: it amounts to cheating.

2. Scores are sensitive to simple formatting changes, such as switching option labels from (A) to (1) or adding extra spaces between options and answers, which can shift measured accuracy by roughly 5% (illustrated in the sketch after this list).

3. Some developers use targeted techniques to improve MMLU scores, such as few-shot learning or chain-of-thought reasoning, so great care must be taken when comparing MMLU scores across laboratories.

4. MMLU may not have been carefully proofread: some researchers have found mislabeled or unanswerable questions in it.

Because of these problems, even this simple, standardized assessment requires judgment and forethought. The article notes that the challenges encountered with MMLU generally apply to other, similar multiple-choice assessments.
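As a rough illustration of how such a multiple-choice evaluation works and why formatting matters, here is a minimal sketch in Python. It assumes a hypothetical query_model function standing in for a model API call and a simple made-up dataset format; it is not Anthropic's actual harness.

```python
import re

# Hypothetical stand-in for a model API call: takes a prompt string and
# returns the model's text completion. Not Anthropic's actual harness.
def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your own model call here")

def format_question(item: dict, label_style: str = "letters") -> tuple[str, str]:
    """Render an MMLU-style item; label_style toggles (A)/(B)... vs (1)/(2)..."""
    labels = ["A", "B", "C", "D"] if label_style == "letters" else ["1", "2", "3", "4"]
    lines = [item["question"]]  # assumed keys: question, choices, answer_index
    for label, choice in zip(labels, item["choices"]):
        lines.append(f"({label}) {choice}")
    lines.append("Answer:")
    return "\n".join(lines), labels[item["answer_index"]]

def accuracy(dataset: list[dict], label_style: str = "letters") -> float:
    correct = 0
    for item in dataset:
        prompt, gold = format_question(item, label_style)
        reply = query_model(prompt)
        # Take the first option label the model produces as its choice.
        match = re.search(r"[A-D1-4]", reply)
        if match and match.group(0) == gold:
            correct += 1
    return correct / len(dataset)

# Comparing accuracy(dataset, "letters") with accuracy(dataset, "numbers")
# on the same items probes the formatting sensitivity described in point 2.
```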

BBQ

Multiple-choice assessments can also measure certain AI harms. To measure these harms in their own model, Claude, Anthropic's researchers used the BBQ benchmark, a commonly used benchmark for assessing model bias against demographic groups. After comparing BBQ with several similar assessments, they were convinced that it provides a good measure of social bias. The work took several months.

The article notes that implementing BBQ was harder than expected. First, no working open-source implementation of BBQ could be found, and it took Anthropic's best engineers a week to implement and test the evaluation. And unlike MMLU, which is scored by accuracy, BBQ's bias scores require nuance and experience to define, calculate, and interpret.

BBQ bias scores range from -1 to 1, where 1 indicates significant stereotype bias, 0 indicates no bias, and -1 indicates significant counter-stereotype bias. After implementing BBQ, Anthropic found that some of its models achieved a bias score of 0, a result that made the researchers optimistic that they were making progress on reducing biased model outputs.
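To make the score's meaning concrete, here is a deliberately simplified sketch of a BBQ-style bias score. The real BBQ metric distinguishes ambiguous and disambiguated contexts and involves further details, so the function below, with its made-up response format, is only illustrative.

```python
def bias_score(responses: list[dict]) -> float:
    """
    Simplified BBQ-style bias score in [-1, 1] (illustrative, not the exact BBQ formula).
    Each response dict is assumed (hypothetically) to have:
      'answer': 'stereotyped' | 'counter_stereotyped' | 'unknown'
     1 -> model always picks the stereotype-consistent target,
     0 -> no systematic bias,
    -1 -> model always picks the counter-stereotypical target.
    """
    non_unknown = [r for r in responses if r["answer"] != "unknown"]
    if not non_unknown:
        return 0.0
    n_biased = sum(1 for r in non_unknown if r["answer"] == "stereotyped")
    return 2 * n_biased / len(non_unknown) - 1
```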

Third-Party Assessment Frameworks

Recently, third parties have been actively developing assessment suites. So far, Anthropic has participated in two of these projects: BIG-bench and Stanford University's HELM (Holistic Evaluation of Language Models). While third-party assessments appear useful, both projects introduced new challenges of their own.

BIG-bench

BIG-bench consists of 204 evaluations, contributed by more than 450 researchers and covering topics ranging from science to social reasoning. Anthropic reports encountering several challenges with this benchmark: simply installing BIG-bench took a great deal of time, and it is not as plug-and-play as MMLU; implementing it took even more effort than BBQ.

BIG-bench also does not scale well, and running all 204 evaluations was very challenging; it had to be rewritten to work with Anthropic's infrastructure, a substantial amount of work.

In addition, during implementation Anthropic found bugs in some of the evaluations, which made the suite inconvenient to use, and the researchers set BIG-bench aside after this experiment.

HELM: curating a set of assessments from the top down

BIG-bench is a "bottom-up" effort: anyone can submit any task, subject to limited review by a team of expert organizers. HELM instead takes a "top-down" approach, with experts deciding which tasks to use to evaluate models.

Specifically, HELM evaluates models across multiple scenarios, such as reasoning and disinformation scenarios, using standard metrics such as accuracy, robustness, and fairness. Anthropic provided HELM's developers with API access so they could run the benchmark on its models.

Compared to BIG-bench, HELM has two advantages: 1) it does not require extensive engineering work, and 2) experts can be relied on to select and interpret specific high-quality assessments.

However, HELM also poses challenges. Methods that work for evaluating other models do not necessarily work for Anthropic's models, and vice versa. For example, Anthropic's Claude family of models is trained to follow a specific text format known as the Human/Assistant format, and Anthropic follows this format internally when evaluating its models. If the format is not followed, Claude sometimes gives unusual answers, making the results of standard assessment metrics less credible.
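For concreteness, the snippet below shows what wrapping an evaluation prompt in a Human/Assistant-style turn format might look like. The wrapper function is hypothetical and only illustrates the kind of formatting mismatch described above, not HELM's or Anthropic's actual code.

```python
def to_human_assistant_prompt(question: str) -> str:
    """Wrap a raw evaluation question in a Human/Assistant turn format
    (illustrative wrapper, not Anthropic's evaluation code)."""
    return f"\n\nHuman: {question}\n\nAssistant:"

# A benchmark harness that sends bare prompts such as
#   "Q: ... (A) ... (B) ... Answer:"
# may get unusual completions from a model trained on conversational turns,
# while the wrapped version
#   to_human_assistant_prompt("Q: ... (A) ... (B) ... Answer with A or B.")
# matches the format the model was trained to follow.
```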

Additionally, HELM takes a long time to run: evaluating a new model can take months and requires coordination and communication with external parties.

AI systems are designed for open-ended, dynamic interaction with people, so how can models be evaluated in settings closer to real-world use?

Crowdworkers for A/B testing

Currently, the field relies primarily (but not exclusively) on one basic type of human evaluation: A/B testing on crowdsourcing platforms, where people hold open-ended conversations with two models, choose whether the response from model A or model B is more helpful or more harmless, and the models are then ranked by helpfulness or harmlessness. The advantage of this approach is that it corresponds to real-world conditions and allows different models to be ranked.
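One common way to turn such pairwise A/B preferences into a ranking, though not necessarily the method Anthropic uses, is an Elo-style rating update. Below is a minimal sketch with hypothetical model names.

```python
from collections import defaultdict

def elo_ranking(comparisons: list[tuple[str, str, str]], k: float = 16.0) -> dict[str, float]:
    """
    comparisons: (model_a, model_b, winner) triples from A/B tests,
    where winner equals model_a or model_b.
    Returns an Elo-style score per model; higher means preferred more often.
    """
    ratings = defaultdict(lambda: 1000.0)  # every model starts at 1000
    for a, b, winner in comparisons:
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400.0))
        score_a = 1.0 if winner == a else 0.0
        ratings[a] += k * (score_a - expected_a)
        ratings[b] += k * ((1.0 - score_a) - (1.0 - expected_a))
    return dict(ratings)

# Example: rank two hypothetical models from three crowdworker judgments.
print(elo_ranking([("model_a", "model_b", "model_a"),
                   ("model_a", "model_b", "model_a"),
                   ("model_a", "model_b", "model_b")]))
```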

However, this evaluation method has limitations, and the experiments are expensive and time-consuming to run.

First, the approach requires partnering with and paying a third-party crowdsourcing platform, building a custom web interface to the model, designing detailed instructions for A/B testers, analyzing and storing the resulting data, and addressing the ethical challenges of employing crowdworkers.

In the case of harmlessness testing, experiments also carry the risk of exposing people to harmful outputs. The results of human evaluations can also vary significantly with the characteristics of the evaluators, including their creativity, motivation, and ability to spot potential flaws in the system under test.

Furthermore, there is an inherent tension between helpfulness and harmlessness: a system can be made less harmful simply by giving unhelpful responses such as "Sorry, I can't help you."

What is the right balance between helpful and harmless? At what metric value is a model helpful and harmless enough? Answering these and many other questions will require further work from researchers in the field.

For more information, please refer to the original article.

Original link: https://www.anthropic.com/index/evaluating-ai-systems


Source: jiqizhixin.com