


Without downstream training, Tip-Adapter greatly improves CLIP image classification accuracy
- Paper link: https://arxiv.org/pdf/2207.09519.pdf
- Code link: https://github.com/gaopengcuhk/Tip-Adapter
1. Research Background
The Contrastive Language-Image Pre-training model (CLIP) has recently demonstrated strong transferability in the visual domain and can perform zero-shot image recognition on new downstream datasets. To further improve CLIP's transfer performance, existing methods such as CoOp and CLIP-Adapter adopt a few-shot setting: they provide a small amount of training data from the downstream dataset so that CLIP can be better adapted to different visual scenarios. However, this additional training step incurs considerable time and memory overhead, which to some extent undermines CLIP's inherent ability to transfer knowledge quickly. We therefore propose Tip-Adapter, a few-shot image classification method that requires no additional downstream training yet greatly improves CLIP's accuracy. On top of it, we propose Tip-Adapter-F, a variant that reaches state-of-the-art performance with only a small amount of fine-tuning and achieves the best trade-off between efficiency and performance. As shown in Table 1 below, Tip-Adapter requires no training time at all and still improves CLIP's accuracy on the ImageNet dataset by 1.7% (Accuracy), while Tip-Adapter-F needs only one-tenth of the training time of previous solutions (Epochs, Time) to achieve the best classification performance to date.
Table 1: Comparison of 16-shot image classification accuracy and training time of different schemes on the ImageNet dataset
2. Research Method
1. Tip-Adapter
The overall network structure of Tip-Adapter is shown in Figure 1 below. Given the few-shot training set and its labels, we use CLIP to build a cache model (Cache Model) without any training; this cache stores the classification knowledge of the downstream training data. At test time, Tip-Adapter linearly combines the prediction of the Cache Model with the prediction of the original CLIP to obtain a stronger final classification result.
In detail, we use CLIP's pre-trained visual encoder (Visual Encoder) to extract the features of all images in the few-shot training set as the Keys of the Cache Model, and convert the corresponding image labels into one-hot encodings as the Values of the Cache Model. Because it reuses the pre-trained Visual Encoder, this Key-Value Cache Model incurs no training overhead; and since the few-shot training set contains only a few images per category (1 to 16 shots), the Cache Model also adds almost no extra GPU memory overhead (see the GPU Mem. column in Table 1).
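To make the construction concrete, here is a minimal PyTorch sketch of the training-free cache build described above. It assumes the open-source openai/CLIP package; the loader and variable names (few_shot_loader, num_classes) are illustrative placeholders, not names from the official Tip-Adapter code.

```python
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

@torch.no_grad()
def build_cache(few_shot_loader, num_classes):
    """Build the Key-Value Cache Model from the few-shot training set."""
    keys, values = [], []
    for images, labels in few_shot_loader:                 # 1-16 images per class
        feats = model.encode_image(images.to(device)).float()
        keys.append(F.normalize(feats, dim=-1))            # L2-normalized features -> Keys
        values.append(F.one_hot(labels, num_classes).float().to(device))  # one-hot labels -> Values
    return torch.cat(keys), torch.cat(values)              # Keys: N x D, Values: N x C
```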
For a test image, we first use CLIP's Visual Encoder to extract its features and then treat these features as a Query for retrieving downstream few-shot knowledge from the Cache Model. Since the Keys are also extracted by CLIP's Visual Encoder, they lie in the same feature space as the test Query, so we can directly compute the cosine similarity between them to obtain a Query-Key affinity matrix, which can be regarded as the weight assigned to each Value. The weighted sum of the Values then gives the classification prediction retrieved from the Cache Model for this test image. In addition, we obtain CLIP's zero-shot prediction by matching the test image features against the text features produced by CLIP's Textual Encoder. A linear weighting of these two predictions yields the final classification result, which combines the image-language contrastive knowledge pre-trained by CLIP with the few-shot knowledge of the new downstream dataset, and therefore achieves stronger image classification accuracy.
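The retrieval and fusion step can be sketched as follows, following the paper's formulation: the Query-Key cosine similarities are sharpened by an exponential function and used to weight the Values, and the result is added to CLIP's zero-shot logits. The names alpha (residual ratio) and beta (sharpness) follow the paper, but the default values here are only illustrative; clip_text_weights is assumed to be the C x D matrix of prompt features from the Textual Encoder.

```python
@torch.no_grad()
def tip_adapter_predict(images, keys, values, clip_text_weights, alpha=1.0, beta=5.5):
    """Fuse Cache-Model retrieval with CLIP's zero-shot prediction."""
    query = F.normalize(model.encode_image(images.to(device)).float(), dim=-1)  # B x D
    affinity = query @ keys.t()                                # cosine similarity, B x N
    cache_logits = (-beta * (1.0 - affinity)).exp() @ values   # retrieved few-shot knowledge, B x C
    clip_logits = 100.0 * query @ clip_text_weights.t()        # CLIP zero-shot prediction, B x C
    return clip_logits + alpha * cache_logits                  # linear fusion of the two
```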
Building on the network structure of Tip-Adapter, we can further turn the Keys of the Cache Model into learnable parameters that are updated through training; this variant is Tip-Adapter-F. With the already-built Cache Model as initialization, Tip-Adapter-F needs only one-tenth of the training epochs and time of the existing CLIP-Adapter to reach higher performance, as shown in Table 1.
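A sketch of the Tip-Adapter-F modification, under the same assumptions as above: the cached Keys initialize the weight of a bias-free linear layer, which is the only part that is fine-tuned, while the Values and both CLIP encoders stay frozen. The optimizer settings are illustrative, not the exact training recipe of the paper.

```python
import torch.nn as nn

cache_keys, cache_values = build_cache(few_shot_loader, num_classes)

# The Keys become the weights of a learnable linear layer (shape N x D),
# so adapter(query) reproduces the Query-Key affinity used above.
adapter = nn.Linear(cache_keys.shape[1], cache_keys.shape[0], bias=False).to(device)
adapter.weight.data = cache_keys.clone()

optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)
# During fine-tuning, replace `query @ keys.t()` in the prediction above with
# `adapter(query)`; the Values and the fusion with CLIP's zero-shot logits stay unchanged.
```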
Figure 1: Network flow chart of Tip-Adapter and Tip-Adapter-F
2. Differences and connections between Tip-Adapter and existing solutions
Compared with CLIP-Adapter, as shown in Figure 2, the Keys and Values stored by Tip-Adapter correspond respectively to the two linear layers of the adapter structure in CLIP-Adapter, except that the former are built without any training, whereas the latter are randomly initialized and then trained to learn the best parameters.
Figure 2: Tip-Adapter compared to CLIP-Adapter
Compared with other existing solutions that build a Cache Model, as shown in Figure 3, the Cache Model of Tip-Adapter can be regarded as a multi-modal vision-language cache: the features output by CLIP's Textual Encoder can be viewed as the Key-Value pairs of the text modality, which is equivalent to using the test image features as a Query to retrieve knowledge from both the visual and the textual cache. Compared with existing solutions that contain only a visual cache, Tip-Adapter can exploit this multi-modal knowledge to achieve stronger recognition performance.
Figure 3: Tip-Adapter compared to other solutions for building Cache Model
3. Experimental Results
1. Classification accuracy on ImageNet
Figure 4 and Table 2 compare the few-shot image classification accuracy of Tip-Adapter, Tip-Adapter-F, and existing solutions under 1, 2, 4, 8, and 16 shots; Table 3 compares the accuracy of different CLIP Visual Encoders on the 16-shot ImageNet dataset. Both of our solutions achieve excellent performance with very little resource overhead.
Figure 4 and Table 2: Comparison of 1~16-shot image classification accuracy of different methods on the ImageNet dataset
Table 5: Comparison of image classification accuracy of different CLIP Visual Encoders on 16-shot ImageNet
2. Results on another 10 image classification datasets
As shown in Figure 5, we provide accuracy comparisons on another 10 image classification datasets: StanfordCars, UCF101, Caltech101, Flowers102, SUN397, DTD, EuroSAT, FGVCAircraft, OxfordPets, and Food101. As the figure shows, our Tip-Adapter-F achieves the highest recognition accuracy on all of them.
Figure 5: Comparison of 1~16-shot image classification accuracy of different methods on another 10 datasets
3. Evaluation of Domain Generalization Ability
We also evaluated Tip-Adapter and Tip-Adapter-F on domain generalization. As shown in Table 6, both of our solutions exhibit strong robustness and feature transferability.
4. Conclusion
This paper proposes Tip-Adapter, a training-free solution that applies CLIP to downstream few-shot image classification. Tip-Adapter builds a Key-Value Cache Model that serves as a knowledge-retrieval database for the test-image Query, and obtains stronger recognition performance by fusing the Cache Model's prediction with CLIP's zero-shot prediction. We hope Tip-Adapter will inspire more follow-up work on the efficient transfer of pre-trained models.