


A 70B model that generates 1,000 tokens per second and beats GPT-4o at code rewriting, from the team behind Cursor, the AI coding tool backed by OpenAI.
A 70B model generating 1,000 tokens per second: that translates into nearly 4,000 characters!
The researchers fine-tuned Llama3 and paired it with an acceleration algorithm, making it 13 times faster than the vanilla model.
And it is not just fast: on code rewriting tasks its performance even surpasses GPT-4o.
The work comes from Anysphere, the team behind the popular AI programming tool Cursor, whose investors include OpenAI.
For perspective: on Groq, a framework famous for fast inference, 70B Llama3 manages only 300-odd tokens per second.
At Cursor's speed, editing an entire code file becomes near-instant.
One commenter asked: if you ran Cursor's modified Llama3 on Groq, could you hit tens of thousands of tokens per second?
Others were even more excited, saying that in the large-model field we are eliminating the very concept of "latency".
Introducing a new inference acceleration algorithm
The acceleration method the team designed targets a task called "Fast Apply": quickly applying suggested modifications to code.
First, a clarification: although the end result of the task is a local modification to the code, in practice the model does not output just the changed portion; it rewrites the entire file.
This was a deliberate choice made after preliminary testing: the team found that, apart from Claude-3-Opus, most models performed poorly on the true partial-modification task.
There are three main reasons for this:
- First, a full rewrite outputs more tokens, which gives the model more forward passes in which to converge on a correct solution.
- Second, most of a model's training data consists of complete code files, so models are relatively unfamiliar with local modifications.
- Third, large models are weak at arithmetic, so there is no guarantee they will handle line numbers correctly when emitting diffs.
(The team still considers diff-based output a promising direction for future research.)
Having settled on the full-rewrite approach, the Cursor team fine-tuned Llama3 on task-specific data.
The training data comes from two sources, real edit data and synthetic data, mixed at a ratio of 1:4.
The synthetic data was produced by having GPT-4 generate code-editing suggestions and then having other models "apply" those suggestions to the original code.
To improve dataset quality, the team also downsampled small files, duplicate files, and samples with no changes.
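The post gives only the mixing ratio and the filtering criteria, not the pipeline itself. The sketch below is a minimal Python illustration of that kind of mixing and filtering; the record format and every threshold here are assumptions, not the team's actual values:

```python
import hashlib
import random

def build_dataset(real_edits, synthetic_edits, synth_per_real=4, seed=0):
    """Mix real and synthetic edit records at roughly 1:4 after filtering.
    Records are assumed to look like {"original", "suggestion", "rewritten"};
    all thresholds and sampling rates below are illustrative guesses."""
    rng = random.Random(seed)
    seen = set()

    def keep(rec):
        digest = hashlib.sha1(rec["original"].encode()).hexdigest()
        if digest in seen:                       # duplicate file: drop
            return False
        seen.add(digest)
        if rec["rewritten"] == rec["original"]:  # unchanged sample: downsample hard
            return rng.random() < 0.05
        if len(rec["original"]) < 200:           # "small file": downsample
            return rng.random() < 0.25
        return True

    real = [r for r in real_edits if keep(r)]
    synth = [r for r in synthetic_edits if keep(r)][: len(real) * synth_per_real]
    mixed = real + synth
    rng.shuffle(mixed)
    return mixed
```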
To evaluate the models, the team ran them on 450 code-editing tasks (each file no more than 400 lines) and used Claude3-Opus to score the outputs.
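The scoring setup can be pictured as a standard LLM-as-judge loop. In the sketch below, the `rewrite` and `judge` callables and the 1-to-5 rubric are hypothetical stand-ins; the team's actual prompt is not public:

```python
from typing import Callable, Dict, List

def evaluate(
    rewrite: Callable[[str, str], str],   # model under test: (original, suggestion) -> file
    judge: Callable[[str], str],          # hypothetical wrapper around a Claude3-Opus call
    tasks: List[Dict[str, str]],          # each task: {"original": ..., "suggestion": ...}
) -> float:
    """Average judge score over all tasks; rubric is an assumed 1-5 scale."""
    scores = []
    for t in tasks:
        output = rewrite(t["original"], t["suggestion"])
        prompt = (
            "On a scale of 1-5, rate how faithfully REWRITTEN applies "
            "SUGGESTION to ORIGINAL. Answer with the number only.\n\n"
            f"ORIGINAL:\n{t['original']}\n\n"
            f"SUGGESTION:\n{t['suggestion']}\n\n"
            f"REWRITTEN:\n{output}\n"
        )
        scores.append(int(judge(prompt).strip()))
    return sum(scores) / len(scores)
```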
In the end, the fine-tuned 70B Llama3 nearly matched Claude3-Opus-diff and beat GPT-4-Turbo and GPT-4o.
Fine-tuning solved the quality problem, but at this point Llama3 was still slow, outputting fewer than 300 characters per second (characters, not words or tokens).
What actually makes the rewriting so fast is another secret weapon.
For the code-rewriting task, the Cursor team introduced an algorithm called speculative edits.
The method uses a deterministic prior to predict several upcoming tokens at once and then has the main large model verify them, which cuts down the number of model calls and, with it, the amount of computation.
The prior exploits a property of the coding task: compared with other text, code has a smaller vocabulary, and its grammar, indentation rules, and so on are far more predictable, so prior knowledge can be used to predict future tokens quite accurately.
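The post does not spell the algorithm out, but for a rewrite task one natural deterministic prior is the unmodified original file itself, since most of the output copies it verbatim. Below is a minimal sketch of one verification step under that assumption; `greedy_next(tokens)` is a hypothetical helper returning the model's greedy prediction for the token after every position, in a single forward pass:

```python
from typing import Callable, List

Token = int

def speculative_edit_step(
    greedy_next: Callable[[List[Token]], List[Token]],
    context: List[Token],
    draft: List[Token],
) -> List[Token]:
    """One verification step of a speculative-edits-style loop.

    `draft` is the deterministic guess, e.g. the next chunk of the
    original file being rewritten. `greedy_next(tokens)` gives the
    model's greedy prediction following every position at once.
    """
    preds = greedy_next(context + draft)   # 1 model call, not len(draft) calls
    accepted: List[Token] = []
    for j, tok in enumerate(draft):
        # The prediction for draft[j] comes from the position just before it.
        model_tok = preds[len(context) + j - 1]
        if model_tok != tok:
            accepted.append(model_tok)     # first disagreement: take the
            break                          # model's token and stop
        accepted.append(tok)
    return accepted
```

In the common case where the model is simply copying unchanged code, the entire draft is accepted, so each forward pass advances by many tokens instead of one.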
This approach also shares something with techniques used by GPT-4 and Meta.
Traditional language-model inference is slow mainly because next-token prediction is autoregressive: to generate each token, the model must attend to all previously generated tokens.
To reduce that computation, large models such as GPT-4 use an acceleration technique called speculative decoding, in which a small approximate model drafts predictions ahead of time and the main large model then verifies them.
The difference between Cursor's approach and GPT-4's is that Cursor's small "model" is a deterministic algorithm, whereas GPT-4's is merely a smaller network whose predictions are still probabilistic.
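To make the contrast concrete, the sketch below shows the two drafting strategies side by side; both could feed the same verification loop shown earlier, and the helper names are hypothetical:

```python
from typing import Callable, List

Token = int

def draft_from_original_file(original: List[Token], cursor: int, k: int = 16) -> List[Token]:
    # Cursor-style deterministic prior: guess that the model will copy
    # the next k tokens of the original file verbatim. Zero extra compute.
    return original[cursor:cursor + k]

def draft_from_small_model(small_next: Callable[[List[Token]], Token],
                           context: List[Token], k: int = 16) -> List[Token]:
    # Classic speculative decoding: a smaller model autoregressively
    # drafts k tokens, which still costs k (cheap) forward passes.
    draft: List[Token] = []
    for _ in range(k):
        draft.append(small_next(context + draft))
    return draft
```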
Meta, for its part, has proposed predicting multiple subsequent tokens at once, using n independent output heads to predict n future tokens in parallel. It found this works especially well on programming tasks, because the logical structure of programming languages is more rigid and the internal connections between pieces of knowledge are tighter.
Cursor exploits the same property, except that instead of extra output heads it uses a more deterministic algorithm to make its multi-token predictions.
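For a rough picture of Meta's side of the comparison, here is a toy PyTorch module in the spirit of multi-token prediction; the trunk and all sizes are placeholders, not Meta's actual architecture:

```python
import torch
import torch.nn as nn

class MultiTokenHead(nn.Module):
    """Toy sketch of multi-token prediction: a shared trunk (stand-in
    for the transformer body) feeds n independent linear heads, each
    predicting the token i steps ahead. Sizes are illustrative."""
    def __init__(self, d_model: int = 512, vocab: int = 32000, n_future: int = 4):
        super().__init__()
        self.trunk = nn.Linear(d_model, d_model)  # placeholder trunk
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab) for _ in range(n_future)
        )

    def forward(self, hidden: torch.Tensor) -> list:
        # hidden: (batch, d_model); returns one logit tensor per future offset.
        h = torch.relu(self.trunk(hidden))
        return [head(h) for head in self.heads]
```

Each head must learn its gamble on a future token, whereas Cursor's deterministic draft needs no learned parameters at all.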
The net result: speculative edits give the 70B Llama3 a nearly 13-fold speedup with no loss in evaluation performance.
The team also partnered with fireworks.ai, an enterprise AI-infrastructure platform, using its optimized inference engine and customized hardware to push the model's runtime efficiency further.
Going forward, the team plans to use knowledge distillation to bring speculative edits to the smaller 8B Llama3, and to extend the approach to more programming languages and tasks.
They also intend to improve the true partial-modification (diff) algorithm that the team has studied but not yet adopted.
One More Thing
In the experiments, the team applied the speculative technique not only to Llama3 but also to GPT-4-Turbo.
However, they did not explain how it was implemented on GPT; instead they posed it as a puzzle and even ran a contest with prizes.
Anyone who answers correctly gets a one-month Cursor membership; implementing the speculative acceleration in vLLM or TensorRT-LLM earns a six-month or one-year membership, respectively.
If you think you have an idea, why not take a crack at the challenge?