DeepMind: Who said convolutional networks are inferior to ViT?
This paper evaluates scaled-up NFNets and challenges the idea that ConvNets perform worse than ViTs on large-scale problems
The early success of deep learning can be attributed to the development of convolutional neural networks (ConvNets). ConvNets have dominated computer vision benchmarks for nearly a decade. In recent years, however, they have increasingly been replaced by Vision Transformers (ViTs).
Many people believe that ConvNets perform well on small or medium-sized datasets, but cannot compete with ViTs on web-scale datasets.
At the same time, the CV community has shifted from evaluating randomly initialized networks on specific datasets (such as ImageNet) to evaluating networks pre-trained on large, general-purpose datasets scraped from the web. This raises an important question: under comparable computational budgets, do Vision Transformers outperform pre-trained ConvNet architectures?
In this paper, researchers from Google DeepMind studied this question. By pre-training NFNet models of various scales on the JFT-4B dataset, they obtained ImageNet performance comparable to that of ViTs.

Paper link address: https://arxiv.org/pdf/2310.16764.pdf
The paper considers pre-training compute budgets between 0.4k and 110k TPU-v4 core hours and trains a series of networks of increasing depth and width from the NFNet model family. The researchers find a log-log scaling law between held-out loss and compute budget.
For example, on JFT-4B with TPU-v4 core hours scaled from 0.4k to 110k, the largest pre-trained NFNet reached 90.4% Top-1 accuracy on ImageNet after fine-tuning, competitive with pre-trained ViT models of the same computational budget.

In short, by evaluating scaled-up NFNets, this paper challenges the notion that ConvNets perform worse than ViTs on large-scale datasets. Given sufficient data and compute, ConvNets remain competitive, and model design and resources matter more than the choice of architecture.
After seeing this research, Turing Award winner Yann LeCun commented: "At a given compute budget, ViTs and ConvNets are computationally equivalent. Although ViTs have achieved impressive success in computer vision, in my opinion there is no strong evidence that pre-trained ViTs outperform pre-trained ConvNets when evaluated fairly."

However, some commenters responded to LeCun that ViTs may still hold an advantage when used in multi-modal models.
Researchers from Google DeepMind said that ConvNets will never disappear.

Next, let's look at the details of the paper.
Pre-trained NFNets follow scaling laws
The study trained a series of NFNet models of varying depth and width on JFT-4B.
As shown in Figure 2, the validation loss and the compute budget used to train the model follow a linear relationship on log-log axes, consistent with the log-log scaling laws observed when using Transformers for language modeling. As the compute budget increases, the optimal model size and the optimal epoch budget (the one that achieves the lowest validation loss) also increase.
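To make the scaling-law claim concrete, here is a minimal, hypothetical sketch of fitting such a log-log relationship. The compute values only echo the budget range mentioned in the article, and the loss values are made-up placeholders, not numbers from the paper.

```python
# Hypothetical sketch: fit a log-log (power-law) scaling law between
# held-out loss and pre-training compute. All numbers are placeholders.
import numpy as np

compute = np.array([0.4e3, 1.6e3, 6.4e3, 26e3, 110e3])  # TPU-v4 core hours (placeholder grid)
loss = np.array([2.10, 1.95, 1.83, 1.74, 1.67])          # placeholder held-out losses

# loss ≈ a * compute^b  <=>  log(loss) = log(a) + b * log(compute)
b, log_a = np.polyfit(np.log(compute), np.log(loss), deg=1)
print(f"fitted exponent b = {b:.3f}, coefficient a = {np.exp(log_a):.3f}")

# Extrapolate the fitted law to a larger compute budget.
predicted_loss = np.exp(log_a) * (500e3) ** b
print(f"predicted held-out loss at 500k core hours: {predicted_loss:.3f}")
```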

The chart below shows the optimal learning rate (i.e., the one that minimizes validation loss) observed for three models across a range of epoch budgets. The researchers found that for small epoch budgets, the NFNet family models all show a similar optimal learning rate of roughly 1.6. The optimal learning rate decreases as the epoch budget increases, and decreases faster for larger models. The researchers note that the optimal learning rate can be assumed to fall slowly and monotonically with increasing model size and epoch budget, so it can be tuned efficiently between successive trials.
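As an illustration of this kind of tuning, here is a minimal, hypothetical sketch: because the optimum drifts slowly and monotonically, one can sweep a narrow grid around the previous best value instead of searching from scratch. The `train_and_evaluate` callable and the candidate factors are assumptions for illustration, not the paper's actual procedure.

```python
# Hypothetical sketch: re-tune the learning rate by sweeping a narrow grid
# around the previously best value, exploiting the slow, monotone drift of
# the optimum with model size and epoch budget.
def pick_learning_rate(train_and_evaluate, previous_best=1.6, factors=(0.5, 0.7, 1.0)):
    """Return the candidate learning rate with the lowest validation loss."""
    candidates = [previous_best * f for f in factors]
    losses = {lr: train_and_evaluate(learning_rate=lr) for lr in candidates}
    return min(losses, key=losses.get)
```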

It should be noted that some of the pre-trained models in Figure 2 did not perform as well as expected. The research team attributes this to the data loading pipeline: if a training run is preempted and restarted, the pipeline does not guarantee that every training sample is seen exactly once per epoch, so a run that is restarted many times may under-sample some training examples.
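For intuition, here is a minimal, hypothetical sketch of a restart-safe sampler that would avoid this failure mode by checkpointing the shuffle seed and the position within the current epoch. This illustrates the general idea only; it is not the pipeline used in the paper.

```python
# Hypothetical sketch: a sampler whose state can be saved with the model
# checkpoint, so a preempted run resumes without skipping or repeating
# examples within an epoch.
import numpy as np

class ResumableSampler:
    def __init__(self, num_examples, seed=0):
        self.num_examples = num_examples
        self.seed = seed
        self.epoch = 0
        self.position = 0

    def state(self):
        # Save alongside the model checkpoint.
        return {"epoch": self.epoch, "position": self.position}

    def load_state(self, state):
        self.epoch, self.position = state["epoch"], state["position"]

    def __iter__(self):
        while True:
            # Deterministic shuffle per epoch, so a restart reproduces the order.
            rng = np.random.default_rng(self.seed + self.epoch)
            order = rng.permutation(self.num_examples)
            for idx in order[self.position:]:
                self.position += 1
                yield int(idx)
            self.epoch += 1
            self.position = 0
```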
NFNet vs ViT
Experiments on ImageNet show that, after fine-tuning, NFNets and Vision Transformers achieve comparable performance.
Specifically, the study fine-tuned the pre-trained NFNets on ImageNet and plotted Top-1 error against pre-training compute, as shown in Figure 1 above.
As the budget increases, ImageNet Top-1 accuracy keeps improving. The most expensive pre-trained model, NFNet-F7, is pre-trained for 8 epochs and reaches 90.3% Top-1 accuracy on ImageNet; pre-training and fine-tuning require roughly 110k and 1.6k TPU-v4 core hours, respectively. If repeated augmentation is additionally used during fine-tuning, Top-1 accuracy reaches 90.4%. NFNets clearly benefit from large-scale pre-training.
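For readers unfamiliar with repeated augmentation, here is a minimal, hypothetical sketch of the general idea: each sampled image contributes several independently augmented copies to the same batch. The `dataset` and `augment` placeholders are assumptions for illustration and do not reflect the paper's actual fine-tuning pipeline.

```python
# Hypothetical sketch of repeated augmentation: every sampled image appears
# `repeats` times in the batch, each copy with an independent augmentation.
import random

def repeated_augmentation_batch(dataset, augment, batch_size=64, repeats=4):
    """Build one batch of `batch_size` examples from `batch_size // repeats` images."""
    unique = random.sample(dataset, batch_size // repeats)
    batch = [(augment(image), label) for image, label in unique for _ in range(repeats)]
    random.shuffle(batch)
    return batch
```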
Despite the obvious architectural differences between NFNets and ViTs, pre-trained NFNets and pre-trained ViTs are comparable in performance. For example, after pre-training on JFT-3B for 210k TPU-v3 core hours, ViT-g/14 achieved 90.2% Top-1 accuracy on ImageNet, while after pre-training on JFT-3B for more than 500k TPU-v3 core hours, ViT-G/14 achieved 90.45%.
The paper also estimates the pre-training cost of these models on TPU-v4: ViT-g/14 would require 120k TPU-v4 core hours to pre-train, ViT-G/14 would require 280k, and SoViT-400m/14 would require 130k. These estimates are used to compare the pre-training efficiency of ViTs and NFNets in Figure 1. The study notes that NFNets are optimized for TPU-v4 and perform less well when evaluated on other devices.
Finally, the paper notes that the pre-trained checkpoints that achieve the lowest validation loss on JFT-4B do not always achieve the highest Top-1 accuracy on ImageNet after fine-tuning. In particular, under a fixed pre-training compute budget, the fine-tuning regime tends to favor slightly larger models and slightly smaller epoch budgets. Intuitively, larger models have more capacity and therefore adapt better to new tasks. In some cases, a slightly larger learning rate during pre-training also yields better performance after fine-tuning.
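A minimal, hypothetical sketch of the selection this implies: under a fixed pre-training compute budget, checkpoints are compared by fine-tuned ImageNet accuracy rather than by pre-training validation loss. The `checkpoints` structure and `finetune_top1` callable are placeholders, not the paper's implementation.

```python
# Hypothetical sketch: the checkpoint with the lowest pre-training validation
# loss is not necessarily the best one after fine-tuning, so selection is done
# on the downstream metric.
def select_checkpoint(checkpoints, finetune_top1):
    """Select by fine-tuned Top-1 accuracy; report if pre-training loss disagrees."""
    best_downstream = max(checkpoints, key=finetune_top1)
    best_pretrain = min(checkpoints, key=lambda c: c["val_loss"])
    if best_downstream is not best_pretrain:
        print("Lowest pre-training loss does not give the best fine-tuned model.")
    return best_downstream
```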