
For the first time, a 'Teaching Director' is introduced into model distillation, and large-scale compression outperforms 24 SOTA methods.


Faced with increasingly sophisticated deep learning models and massive video data, artificial intelligence algorithms depend more and more heavily on computing resources. To effectively improve the performance and efficiency of deep models, this work explores the distillability and sparsability of models and proposes a unified model compression technique based on a "Dean-Teacher-Student" framework.

This result was completed by a joint research team from People's Science and Technology and the Institute of Automation, Chinese Academy of Sciences. The related paper was published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), a top international journal in artificial intelligence. The work introduces the role of "teaching director" (dean) into model distillation for the first time, unifying the distillation and pruning of deep models.


Paper address: https://ieeexplore.ieee.org/abstract/document/9804342

At present, this achievement has been applied to "Baize", a cross-modal intelligent search engine independently developed by People's Science and Technology. "Baize" breaks down the barriers between modalities such as images, text, audio and video by mapping information from these different modalities into a unified feature representation space. With video at its core, it learns a unified distance metric across multiple modalities, bridging the semantic gap between text, speech, and video content to provide unified search capabilities.

However, in the face of massive Internet data, especially video big data, cross-modal deep models consume ever more computing resources. Based on this research result, "Baize" can compress its models at scale while maintaining algorithm performance, enabling high-throughput, low-power cross-modal understanding and search. According to preliminary practical applications, the technology compresses the parameter scale of large models by more than four times on average. On the one hand, this greatly reduces the models' consumption of high-performance computing resources such as GPU servers; on the other hand, large models that previously could not be deployed at the edge can be distilled and compressed for low-power edge deployment.

A joint learning framework for model compression

Compression and acceleration of deep models can be achieved through distillation learning or structured sparse pruning, but both approaches have limitations. Distillation learning aims to train a lightweight model (the student network) to imitate a complex, large model (the teacher network); under the guidance of the teacher network, the student network can achieve better performance than it would by training alone.
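For reference, the conventional teacher-student objective is typically a weighted sum of a softened KL-divergence term and the ordinary classification loss. The sketch below is a generic illustration of that baseline; the function name, temperature, and weighting are assumptions for illustration, not details taken from the paper.

```python
# A minimal sketch of standard soft-label distillation (generic baseline,
# not the paper's method); temperature and alpha are illustrative defaults.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Weighted sum of soft-label distillation and the usual hard-label loss."""
    # Soften both distributions and match them with KL divergence.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Ground-truth supervision on the student's own predictions.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * hard
```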

However, distillation learning algorithms focus only on improving the performance of the student network and often ignore the importance of the network structure. The structure of the student network is generally predefined and kept fixed during training.

Structured sparse pruning (or filter pruning) methods, by contrast, aim to prune a redundant, complex network into a sparse, compact one. However, pruning is used only to obtain a compact structure: none of the existing methods make full use of the "knowledge" contained in the original complex model. Recent research combines distillation learning with structured sparse pruning in order to balance model performance and size, but these methods are limited to simple combinations of loss functions.
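To make structured (filter-level) pruning concrete, the sketch below ranks a convolution layer's output filters by L1 norm and keeps only the strongest ones. This is a common baseline criterion, not the criterion used in the paper, and a real implementation would also have to adjust the downstream layers that consume the pruned channels.

```python
# A generic filter-pruning baseline: keep the output filters with the largest
# L1 norm. Illustrative only; downstream layers are not adjusted here.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Return a slimmer Conv2d keeping the filters with the largest L1 norm."""
    weight = conv.weight.data                     # (out_channels, in_channels, k, k)
    scores = weight.abs().sum(dim=(1, 2, 3))      # one L1 score per output filter
    n_keep = max(1, int(keep_ratio * weight.size(0)))
    keep_idx = torch.argsort(scores, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = weight[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned
```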

To analyze these issues in depth, this study first trained models under compression and, by analyzing model performance and structure, found that deep models possess two important attributes: distillability and sparsability.

Specifically, distillability refers to the density of effective knowledge that can be distilled from the teacher network. It can be measured by the performance gain a student network achieves under the teacher's guidance: a student network with higher distillability achieves higher performance. Distillability can also be analyzed quantitatively at the level of individual network layers.

As shown in Figure 1-(a), the bar chart shows the cosine similarity between the gradient of the distillation loss and the gradient of the ground-truth classification loss. A larger cosine similarity indicates that the currently distilled knowledge is more helpful to model performance, so cosine similarity can serve as a measure of distillability. Figure 1-(a) shows that distillability gradually increases in deeper layers of the model, which also explains why the supervision commonly used in distillation learning is applied to the last few layers. Moreover, the student model exhibits different distillability in different training rounds, since the cosine similarity changes over the course of training. It is therefore necessary to analyze the distillability of different layers dynamically during training.
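In code, this proxy boils down to comparing two gradients at the same layer. A minimal sketch, assuming PyTorch and that `layer_params` is the list of parameters of the layer being analyzed; the names are illustrative and not taken from the paper's code.

```python
# Distillability proxy: cosine similarity between the distillation-loss
# gradient and the classification-loss gradient at one layer.
import torch

def layer_distillability(distill_loss, ce_loss, layer_params):
    """Cosine similarity between the two loss gradients w.r.t. one layer."""
    g_kd = torch.autograd.grad(distill_loss, layer_params, retain_graph=True)
    g_ce = torch.autograd.grad(ce_loss, layer_params, retain_graph=True)
    g_kd = torch.cat([g.flatten() for g in g_kd])
    g_ce = torch.cat([g.flatten() for g in g_ce])
    return torch.nn.functional.cosine_similarity(g_kd, g_ce, dim=0)
```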

Sparsability, on the other hand, refers to the pruning rate (or compression rate) a model can attain under a limited loss in accuracy; higher sparsability corresponds to the potential for a higher pruning rate. As shown in Figure 1-(b), different layers or modules of a network exhibit different sparsability. Like distillability, sparsability can be analyzed at the layer level and along the time dimension. However, no existing method explores and analyzes distillability and sparsability; existing methods typically use a fixed training scheme, which makes it difficult to reach an optimal result.


Figure 1 Schematic diagram of the distillability and sparsability of deep neural networks
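One simple way to probe the sparsability of a single layer, in the spirit of Figure 1-(b), is to prune it at increasing rates and record the largest rate whose accuracy drop stays within a tolerance. The sketch below is purely illustrative; `prune_layer` and `evaluate` are hypothetical stand-ins for whatever pruning and evaluation routines are actually used.

```python
# Illustrative sparsability probe: the largest pruning rate for one layer
# whose accuracy drop stays within `max_acc_drop` percentage points.
# `prune_layer(model, layer_name, rate)` and `evaluate(model)` are
# hypothetical helpers, not part of the paper's code.
def layer_sparsability(model, layer_name, prune_layer, evaluate,
                       rates=(0.1, 0.3, 0.5, 0.7, 0.9), max_acc_drop=1.0):
    base_acc = evaluate(model)
    best_rate = 0.0
    for rate in rates:
        candidate = prune_layer(model, layer_name, rate)  # pruned copy of the model
        if base_acc - evaluate(candidate) <= max_acc_drop:
            best_rate = rate
    return best_rate
```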

To solve the above problems, this study analyzes the training process of model compression and derives findings about distillability and sparsability. Inspired by these findings, it proposes a model compression method based on joint learning of dynamic distillability and sparsability. The method dynamically combines distillation learning with structured sparse pruning, and adaptively adjusts the joint training scheme by learning the distillability and sparsability.

Unlike the conventional "teacher-student" framework, the method proposed here can be described as a "learning-in-school" framework, because it contains three major modules: the teacher network, the student network, and the dean (teaching director) network.

Specifically, as before, the teacher network teaches the student network. The dean network is responsible for controlling the intensity and manner of the student network's learning: by observing the current state of the teacher and student networks, it evaluates the distillability and sparsability of the current student network, and then dynamically balances and controls the strength of the distillation learning supervision and the structured sparse pruning supervision.

To optimize the proposed method, the study also proposes a joint distillation-and-pruning optimization algorithm based on the alternating direction method of multipliers (ADMM) to update the student network, and a meta-learning-based optimization algorithm to update the dean network. Distillability can in turn be influenced by dynamically adjusting the supervision signals: as Figure 1-(a) shows, the proposed method is able to delay the downward trend of distillability and improve overall distillability by making reasonable use of the distilled knowledge.
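For intuition about the ADMM part, a generic ADMM weight-pruning step alternates between (1) a gradient update of the weights under the task loss plus a quadratic penalty, (2) a projection of an auxiliary copy onto the sparsity constraint, and (3) a dual-variable update. The sketch below shows only this generic pattern; it is not the paper's joint distillation-and-pruning algorithm, and `task_loss_fn` stands in for the combined supervision.

```python
# Generic ADMM-style pruning step (illustrative, unstructured magnitude
# projection; the paper uses a structured, jointly supervised variant).
import torch

def admm_prune_step(weight, Z, U, task_loss_fn, optimizer, rho=1e-3, sparsity=0.5):
    # (1) Primal update of the weights: task loss plus the quadratic ADMM penalty.
    optimizer.zero_grad()
    loss = task_loss_fn() + (rho / 2) * torch.norm(weight - Z + U) ** 2
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # (2) Z update: project W + U onto the sparsity constraint by keeping
        #     only the k largest-magnitude entries.
        target = weight + U
        k = max(1, int((1 - sparsity) * target.numel()))
        threshold = torch.topk(target.abs().flatten(), k).values[-1]
        Z = torch.where(target.abs() >= threshold, target, torch.zeros_like(target))
        # (3) Dual update.
        U = U + weight - Z
    return Z, U
```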

The overall framework and flow of the proposed method are shown in the figure below. The framework contains three major modules: the teacher network, the student network, and the dean network. The initial complex, redundant network to be compressed and pruned serves as the teacher network, while the same network, gradually sparsified during subsequent training, serves as the student network. The dean network is a meta-network that takes information from the teacher and student networks as input to measure the current distillability and sparsability, and thereby controls the supervision strength of distillation learning and sparsification.

In this way, at every moment the student network can be guided and sparsified by dynamically distilled knowledge. For example, when the student network has higher distillability, the dean applies a stronger distillation supervision signal to guide it (see the pink arrows in Figure 2); conversely, when the student network has higher sparsability, the dean applies a stronger sparsity supervision signal (see the orange arrows in Figure 2).


Figure 2 Schematic diagram of the model compression algorithm based on joint learning of distillability and sparsability
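Conceptually, the dean can be thought of as a small meta-network that looks at summary statistics of the teacher and student and emits coefficients that re-weight the distillation and sparsity supervision at every step. The sketch below is a heavily simplified illustration of that idea; the architecture, inputs, and loss terms are assumptions, not the paper's design.

```python
# Simplified "dean" controller: maps a state vector describing the current
# teacher/student status to two positive supervision weights.
# Architecture and state contents are illustrative assumptions.
import torch
import torch.nn as nn

class DeanNet(nn.Module):
    def __init__(self, state_dim: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, state):
        # Two positive coefficients: distillation strength and sparsity strength.
        return torch.nn.functional.softplus(self.mlp(state)).unbind(-1)

def total_loss(ce_loss, kd_loss, sparsity_loss, dean, state):
    w_kd, w_sp = dean(state)  # the dean re-balances supervision at every step
    return ce_loss + w_kd * kd_loss + w_sp * sparsity_loss
```

In the paper's framework, this controller role is played by the dean network, which is itself updated by the meta-learning procedure described above; the sketch omits that update.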

Experimental results

The experiments compare the proposed method with 24 mainstream model compression methods (including sparse pruning methods and distillation learning methods) on the small-scale CIFAR datasets and the large-scale ImageNet dataset. The results, shown below, demonstrate the superiority of the proposed method.

Table 1 Performance comparison of model pruning results on CIFAR-10:


Table 2 Performance comparison of model pruning results on ImageNet:


For more research details, please refer to the original paper.
