
Analyzing the interpretability of large models: a review reveals the truth and answers doubts

王林 | Release: 2023-09-29 18:53:05
Large language models show surprising reasoning capabilities in natural language processing, but their underlying mechanisms are not yet clear. As these models are deployed ever more widely, elucidating how they operate is critical for application safety, for understanding their performance limits, and for keeping their social impact controllable.

Recently, research institutions in China and the United States (New Jersey Institute of Technology, Johns Hopkins University, Wake Forest University, University of Georgia, Shanghai Jiao Tong University, Baidu, etc.) jointly released a review of interpretability techniques for large models. It comprehensively surveys interpretability techniques for traditionally fine-tuned models and for very large prompting-based models, and discusses evaluation criteria for model explanations as well as future research challenges.


  • Paper link: https://arxiv.org/abs/2309.01029
  • Github link: https://github.com/hy-zhao23/Explainability-for-Large-Language-Models


Where does the difficulty of explaining large models lie?

Why is it difficult to explain large models? The astonishing performance of large language models on natural language processing tasks has attracted widespread attention. At the same time, how to explain their stunning cross-task performance is one of the pressing challenges facing academia. Unlike traditional machine learning or earlier deep learning models, the ultra-large architecture and massive training corpora give large models powerful reasoning and generalization capabilities. Major difficulties in providing interpretability for large language models (LLMs) include:

  • High model complexity. Unlike deep learning models or traditional statistical machine learning models before the LLM era, LLMs are enormous in scale and contain billions of parameters. Their internal representations and reasoning processes are very complex, making it difficult to explain their specific outputs.
  • Strong data dependence. LLMs rely on large-scale text corpora during training. Biases, errors, and other flaws in these training data may affect the model, but it is difficult to fully judge how training-data quality influences the model.
  • Black-box nature. LLMs are usually treated as black-box models, even open-source ones such as Llama-2. It is hard to explicitly identify their internal reasoning chains and decision-making processes; they can only be analyzed from inputs and outputs, which makes interpretability difficult.
  • Output uncertainty. The output of LLMs is often uncertain: different outputs may be produced for the same input, which further increases the difficulty of interpretation.
  • Insufficient evaluation metrics. Current automatic evaluation metrics for dialogue systems are not sufficient to fully reflect model interpretability; evaluation metrics that take human understanding into account are needed.

Training paradigms for large models

To better summarize the interpretability of large models, we divide the training paradigms of models at or above the scale of BERT into two types: 1) the traditional fine-tuning paradigm; 2) the prompting-based paradigm.

Traditional fine-tuning paradigm

In the traditional fine-tuning paradigm, a base language model is first pre-trained on a large unlabeled text corpus and then fine-tuned on labeled datasets from a specific domain. Common models of this type include BERT, RoBERTa, ELECTRA, DeBERTa, etc.
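To make the paradigm concrete, below is a minimal sketch of the fine-tuning step, assuming the Hugging Face transformers and datasets libraries and the SST-2 sentiment task; the dataset choice and hyperparameters are illustrative, not taken from the paper.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Labeled task-specific data (SST-2 sentiment classification).
dataset = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

# Start from pre-trained weights and add a task-specific classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="sst2-bert", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()  # adapt the pre-trained encoder to the labeled downstream task
```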

Prompting-based paradigm

The prompting-based paradigm implements zero-shot or few-shot learning through prompts. As in the traditional fine-tuning paradigm, the base model first needs to be pre-trained; however, fine-tuning under the prompting paradigm is usually carried out through instruction tuning and reinforcement learning from human feedback (RLHF). Common models of this type include GPT-3.5, GPT-4, Claude, LLaMA-2-Chat, Alpaca, Vicuna, etc. The training process is illustrated below:

[Figure: training process of the prompting-based paradigm]
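For illustration, here is a minimal sketch of few-shot (in-context) learning under this paradigm; the `generate` helper and model name are hypothetical placeholders for whatever instruction-tuned model and client you use.

```python
# A few-shot prompt: the task is demonstrated through in-context examples,
# and the model completes the final example without any gradient updates.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The plot was predictable and the acting was flat."
Sentiment: Negative

Review: "A moving story with superb performances."
Sentiment: Positive

Review: "I couldn't stop laughing; easily the best comedy this year."
Sentiment:"""

# response = generate(model="llama-2-chat", prompt=few_shot_prompt)  # hypothetical client
# Expected continuation: "Positive"
print(few_shot_prompt)
```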

Model explanation based on the traditional fine-tuning paradigm

Model explanation based on the traditional fine-tuning paradigm includes explanations of individual predictions (local explanation) and explanations of structural components of the model, such as neurons and network layers (global explanation).

Local explanation

Local explanation explains the model's prediction for a single sample. Its methods include feature attribution, attention-based explanation, example-based explanation, and natural language explanation.


1. The purpose of feature attribution is to measure the correlation between each input feature (e.g. a word, phrase, or text span) and the model's prediction. Feature attribution methods can be divided into:

  • Perturbation-based explanation: observe the impact on the output by modifying specific input features;

  • Gradient-based explanation: use the partial derivative of the output with respect to each input as that input's importance score (see the sketch after this list);

  • Surrogate models: fit a simple, human-understandable model to the outputs of the complex model to obtain the importance of each input;

  • Decomposition-based techniques: aim to linearly decompose feature relevance scores.
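Below is a minimal sketch of gradient-based attribution (gradient × input) for a fine-tuned sentiment classifier, assuming the Hugging Face transformers library; the model name is illustrative and this is not the paper's own implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

inputs = tokenizer("The movie was surprisingly good", return_tensors="pt")

# Run the forward pass from the token embeddings so gradients can flow to them.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)
logits = model(inputs_embeds=embeddings,
               attention_mask=inputs["attention_mask"]).logits

pred = logits.argmax(dim=-1).item()
logits[0, pred].backward()  # gradient of the predicted class score

# Importance of each token = gradient w.r.t. its embedding, times the
# embedding itself, summed over the hidden dimension.
scores = (embeddings.grad * embeddings).sum(dim=-1).squeeze(0).detach()
for tok, s in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), scores):
    print(f"{tok:>15s} {s.item():+.4f}")
```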

2. Attention-based explanation: attention is often viewed as a way of focusing on the most relevant parts of the input, so the attention weights may encode information useful for explaining predictions. Common attention-based explanation methods include:

  • Attention visualization, to intuitively observe how attention scores change across different scales (see the sketch below);
  • Function-based explanation, such as taking partial derivatives of the output with respect to the attention weights. However, using attention as a lens for explanation remains controversial in the academic community.
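As a concrete illustration, here is a minimal sketch of attention visualization: extracting the attention weights of one head for a short input and printing them as a matrix. The model choice and the text-based display are illustrative; in practice heatmaps are typically used.

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # one tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
last_layer_head0 = attentions[-1][0, 0]      # [seq_len, seq_len], head 0

# Each row shows how much a token attends to every other token.
print(" " * 8 + "".join(f"{t:>8s}" for t in tokens))
for t, row in zip(tokens, last_layer_head0):
    print(f"{t:>8s}" + "".join(f"{w:8.2f}" for w in row.tolist()))
```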

3. Example-based explanation probes and explains the model through individual cases, and is mainly divided into adversarial examples and counterfactual examples (see the sketch after this list).

  • Adversarial examples exploit the model's sensitivity to small changes. In natural language processing they are usually obtained by modifying the text in ways that are hard for humans to notice, yet these transformations often lead to different predictions from the model.
  • Counterfactual examples are obtained by transforming the text, for example through negation, and usually serve as a test of the model's causal reasoning ability.
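Below is a minimal sketch of a counterfactual test: negate part of the input and check whether the model's prediction flips accordingly. The model name is illustrative.

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

original       = "The service was helpful and the food was great."
counterfactual = "The service was not helpful and the food was not great."

for text in (original, counterfactual):
    pred = classifier(text)[0]
    print(f"{pred['label']:>8s} ({pred['score']:.2f})  {text}")
# If the model genuinely relies on the negated phrases, the label should flip
# from POSITIVE to NEGATIVE on the counterfactual input.
```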

4. Natural language explanation trains the model on the original text together with manually annotated explanations, so that the model learns to generate natural-language explanations of its own decision-making process.

Global explanation

Global explanation aims to provide higher-order accounts of how large models work at the level of the model's constituent structures, including neurons, hidden layers, and larger modules. It mainly explores the semantic knowledge learned by different network components.

  • Probe-based explanation: probing techniques rely on classifier detection: a shallow classifier is trained on top of a pre-trained or fine-tuned model and then evaluated on a held-out dataset, so that the classifier can identify linguistic features or reasoning abilities (see the sketch after this list).
  • Neuron activation: traditional neuron-activation analysis considers only a subset of important neurons and then learns the relationship between those neurons and semantic features. More recently, GPT-4 has also been used to explain neurons: rather than selecting a few neurons to explain, GPT-4 can be applied to explain all neurons.
  • Concept-based explanation: the input is first mapped to a set of concepts, and the model is then explained by measuring the importance of each concept to the prediction.
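Here is a minimal sketch of a probing classifier: freeze a pre-trained encoder, take its hidden states, and train a shallow linear probe to predict a linguistic property. The toy data, labels, and property are illustrative, not from the paper.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)
encoder.eval()

# Toy probing dataset: label 1 if the sentence contains a past-tense verb.
sentences = ["She walked home.", "He eats apples.", "They played chess.",
             "The dog barks loudly.", "We visited Paris.", "I like tea."]
labels = [1, 0, 1, 0, 1, 0]

def cls_embedding(text):
    with torch.no_grad():
        out = encoder(**tokenizer(text, return_tensors="pt"))
    return out.last_hidden_state[0, 0].numpy()   # frozen [CLS] representation

X = [cls_embedding(s) for s in sentences]

# The probe is deliberately shallow: if it succeeds, the property is
# (linearly) recoverable from the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("train accuracy:", probe.score(X, labels))
```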

Model explanation based on the prompting paradigm

Model explanation based on the prompting paradigm requires explaining the base model and the assistant model separately, in order to distinguish the capabilities of the two and to trace the path along which the model learns. The issues explored mainly include: the benefits of providing explanations to the model for few-shot learning, and understanding the origin of few-shot learning and chain-of-thought capabilities.

Base model explanation

  • Benefits of explanations for model learning: explore whether providing explanations helps the model learn in the few-shot setting.
  • In-context learning: explore the mechanism of in-context learning in large models, and how it differs between large and medium-sized models.
  • Chain-of-thought prompting: explore how chain-of-thought prompting improves model performance (see the sketch after this list).
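For reference, a minimal sketch of a chain-of-thought prompt is shown below: the demonstration includes intermediate reasoning steps, which empirically helps on multi-step problems. The `generate` helper and model name are hypothetical placeholders.

```python
cot_prompt = """Q: A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more.
How many apples does it have now?
A: The cafeteria started with 23 apples. It used 20, leaving 23 - 20 = 3.
It bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: Roger has 5 tennis balls. He buys 2 cans of tennis balls with 3 balls each.
How many tennis balls does he have now?
A:"""

# response = generate(model="gpt-3.5-turbo", prompt=cot_prompt)  # hypothetical client
# A model prompted this way tends to produce its own reasoning chain, e.g.
# "2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11."
print(cot_prompt)
```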

Assistant model explanation

  • The role of fine-tuning: the assistant model is usually first pre-trained to acquire general semantic knowledge, and then acquires domain knowledge through supervised learning and reinforcement learning. At which stage the assistant model's knowledge mainly originates remains to be studied.
  • Hallucination and uncertainty: the accuracy and credibility of large-model predictions remain important research topics. Despite their powerful inference capabilities, large models often produce misinformation and hallucinations, and this uncertainty in their predictions poses major challenges to widespread application.

Evaluation of model explanations

Evaluation metrics for model explanations include plausibility, faithfulness, stability, robustness, etc. The paper mainly discusses two widely studied dimensions: 1) plausibility to humans; 2) faithfulness to the model's internal logic.
The evaluation of explanations for traditionally fine-tuned models has mainly focused on local explanations. Plausibility is usually measured by comparing model explanations with human-annotated explanations against designed criteria. Faithfulness focuses more on quantitative metrics; since different metrics capture different aspects of the model or data, there is still no unified standard for measuring faithfulness (a sketch of one common faithfulness metric follows). Evaluation of explanations under the prompting paradigm requires further research.
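Below is a minimal sketch of one common faithfulness metric, comprehensiveness: remove the tokens an attribution method ranks as most important and measure how much the predicted class probability drops; a large drop suggests the explanation is faithful. The model and the assumed attribution ranking are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

def prob_of(text, target_class):
    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt")).logits
    return torch.softmax(logits, dim=-1)[0, target_class].item()

text = "The film was brilliant and deeply moving"
top_k_words = {"brilliant", "moving"}   # assumed output of an attribution method

with torch.no_grad():
    pred = model(**tokenizer(text, return_tensors="pt")).logits.argmax().item()

reduced = " ".join(w for w in text.split() if w not in top_k_words)
comprehensiveness = prob_of(text, pred) - prob_of(reduced, pred)
print(f"comprehensiveness = {comprehensiveness:.3f}")
```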

Future Research Challenges

1. Lack of effective, correct explanations. The challenge comes from two aspects: 1) a lack of standards for designing effective explanations; 2) the lack of effective explanations leaves the evaluation of explanations without adequate support.

2. The origin of the emergence phenomenon is unknown. The emergent abilities of large models can be explored from the perspective of the model and of the data. From the model perspective: 1) the model structures that give rise to emergence; 2) the minimum model scale and complexity that yield superior performance on cross-language tasks. From the data perspective: 1) the subset of data that determines a specific prediction; 2) the relationship between emergent ability, model training, and data contamination; 3) the impact of the quality and quantity of training data on the respective effects of pre-training and fine-tuning.

3. Differences between the fine-tuning paradigm and the prompting paradigm. Their different performance on in-distribution and out-of-distribution data implies different ways of reasoning. Open questions include: 1) the differences in reasoning paradigms when the data are in-distribution; 2) the sources of differences in model robustness when the data are out-of-distribution.

4. The shortcut-learning problem in large models. Under the two paradigms, shortcut learning manifests in different ways. Although the abundant data sources of large models relatively alleviate shortcut learning, elucidating how shortcuts form and proposing solutions remain important for model generalization.

5. Attention redundancy. Redundancy in attention modules is widespread under both paradigms. The study of attention redundancy can inform model compression techniques.

6. Safety and ethics. The interpretability of large models is critical for controlling models and limiting their negative impacts, such as bias, unfairness, information pollution, and social manipulation. Building explainable AI models can effectively help avoid these problems and lead to ethical artificial intelligence systems.

Source: jiqizhixin.com