


How to improve model efficiency with limited resources? A survey of efficient NLP methods
Training ever-larger deep learning models has become a dominant trend over the past decade. As shown in the figure below, the steady growth in parameter counts keeps improving neural network performance and has opened up new research directions, but it also brings a growing set of problems.
First, access to such models is often restricted: they are not open source, or even when they are, running them demands substantial computing resources. Second, their parameters do not transfer universally across tasks, so large amounts of resources are needed for training and inference. Third, models cannot grow indefinitely, since parameter size is ultimately limited by hardware. To address these issues, a new research trend focused on improving efficiency is emerging.
Recently, more than a dozen researchers from the Hebrew University, the University of Washington, and other institutions jointly wrote a survey summarizing efficient methods in the field of natural language processing (NLP).
Paper address: https://arxiv.org/pdf/2209.00099.pdf
Efficiency usually refers to the relationship between the resources a system consumes and the output it produces; an efficient system produces output without wasting resources. In the field of NLP, we think of efficiency as the relationship between a model's cost and the results it produces.
Equation (1) describes the cost of an AI model producing a certain result R as proportional to three (non-exhaustive) factors:

Cost(R) ∝ E · D · H

(1) the cost of executing the model on a single example (E);
(2) the size of the training dataset (D);
(3) the number of training runs required for model selection or hyperparameter tuning (H).

A back-of-envelope illustration of how these factors combine is sketched below.
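The following Python snippet treats Equation (1) as a rough compute calculator. The concrete numbers (FLOPs per example, dataset size, number of runs) are hypothetical values chosen only to show how the three factors multiply; they are not figures from the paper.

```python
# Hypothetical illustration of Equation (1): Cost(R) ∝ E · D · H.

def training_cost(flops_per_example: float, dataset_size: int, num_runs: int) -> float:
    """Return a total proportional to E (per-example cost) * D (dataset size) * H (training runs)."""
    return flops_per_example * dataset_size * num_runs

# E.g., ~1e12 FLOPs per example, 1M training examples, 10 tuning runs.
total = training_cost(flops_per_example=1e12, dataset_size=1_000_000, num_runs=10)
print(f"Estimated training compute: {total:.3e} FLOPs")
```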
Cost(·) can then be measured along multiple dimensions, since computational, time, and environmental costs can each be quantified in a variety of ways. For example, computational cost can be measured as the total number of floating-point operations (FLOPs) or as the number of model parameters. Because any single cost metric can be misleading, the study collects and organizes work on multiple aspects of efficient NLP and discusses which aspects are beneficial for which use cases.
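As a concrete illustration of these two proxies, the sketch below (assuming PyTorch and a toy feed-forward model, neither of which appears in the paper) counts trainable parameters and roughly estimates FLOPs per token. Production FLOP counts are usually obtained with profiling tools rather than by hand.

```python
import torch.nn as nn

# A toy model standing in for a real NLP architecture.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))

# Proxy 1: number of trainable parameters.
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)

# Proxy 2: rough FLOPs per token, counting ~2 FLOPs (one multiply, one add)
# per weight in each linear layer; activations and biases are ignored.
flops_per_token = sum(
    2 * m.in_features * m.out_features
    for m in model.modules()
    if isinstance(m, nn.Linear)
)

print(f"Parameters: {num_params:,}")
print(f"Approx. FLOPs per token: {flops_per_token:,}")
```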
This study aims to provide a basic introduction to the wide range of methods for improving NLP efficiency. The survey is therefore organized along the typical NLP model pipeline (Figure 2 below), introducing existing methods for making each stage more efficient.
This work provides a practical efficiency guide for NLP researchers, aimed mainly at two types of readers:
(1) Researchers from various fields of NLP who work in resource-limited environments: depending on the resource bottleneck, readers can jump directly to the relevant aspect of the NLP pipeline. For example, if inference time is the main limitation, Chapter 6 of the paper describes the related efficiency improvements.
(2) Researchers interested in advancing the current state of efficient NLP methods: the paper can serve as an entry point for identifying opportunities for new research directions.
Figure 3 below outlines the efficient NLP methods summarized in this study.
In addition, although the choice of hardware has a large impact on model efficiency, most NLP researchers do not directly control decisions about hardware, and most hardware optimizations are useful for all stages of the NLP pipeline. This study therefore focuses on algorithms, while providing a brief introduction to hardware optimization in Chapter 7. Finally, the paper further discusses how to quantify efficiency, what factors should be considered during evaluation, and how to decide on the most appropriate model.
Interested readers can refer to the original paper for further research details.