
CVPR 2024 perfect-score paper! Meta proposes EfficientSAM: segment everything quickly!

Mar 02, 2024 am 10:10 AM

EfficientSAM was accepted to CVPR 2024 with a perfect review score of 5/5/5! The authors shared the news on social media.

Turing Award winner Yann LeCun also recommended this work.

In recent research, Meta researchers have proposed a new pre-training method, SAM-leveraged masked image pre-training (SAMI). This approach combines MAE pre-training techniques with the SAM model to obtain high-quality pre-trained ViT encoders. Through SAMI, the researchers aim to improve model performance and efficiency and provide better solutions for vision tasks, bringing new ideas and opportunities for further exploration in computer vision and deep learning.


  • Paper link: https://arxiv.org/pdf/2312.00863
  • Code: https://github.com/yformer/EfficientSAM
  • Homepage: https://yformer.github.io/efficient-sam/

This approach reduces the complexity of SAM while maintaining good performance. Specifically, SAMI uses the SAM ViT-H encoder to generate feature embeddings and trains a masked image model with a lightweight encoder to reconstruct features from SAM's ViT-H rather than raw image patches. The resulting general-purpose ViT backbone can be used for downstream tasks such as image classification, object detection and segmentation. The pre-trained lightweight encoder is then fine-tuned with the SAM decoder for the segment anything task.
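
The key difference from standard MAE is the reconstruction target. Below is a minimal sketch contrasting the two targets; it is not the authors' released code, and `patchify` and `sam_vith_encoder` are hypothetical stand-ins.

```python
import torch

def mae_target(image, patchify):
    # Plain MAE: the reconstruction target is the raw pixel content of each patch.
    return patchify(image)              # (num_patches, patch_dim)

def sami_target(image, sam_vith_encoder):
    # SAMI: the reconstruction target is the frozen SAM ViT-H feature embedding,
    # so the lightweight encoder learns to mimic SAM's representation.
    with torch.no_grad():
        return sam_vith_encoder(image)  # (num_tokens, sam_feature_dim)
```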

To verify the effectiveness of this approach, the researchers used a transfer learning setting with masked image pre-training. Specifically, they first pre-trained the model with a reconstruction loss on the ImageNet dataset at an image resolution of 224×224, and then fine-tuned the model with supervised data from the target task. This transfer learning strategy helps the model learn quickly and improves performance on new tasks, because the pre-training stage has already taught the model to extract features from raw data, and the knowledge learned on a large-scale dataset makes it easier for the model to adapt to different tasks.

With SAMI pre-training, models such as ViT-Tiny/-Small/-Base can be trained on ImageNet-1K with improved generalization performance. For the ViT-Small model, after 100 epochs of fine-tuning on ImageNet-1K, the researchers achieved a Top-1 accuracy of 82.7%, which is better than other state-of-the-art image pre-training baselines.
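
As a rough illustration of this transfer-learning recipe (the classifier head, pooling assumption and loss below are illustrative assumptions, not details from the paper), the pre-trained encoder is reused and fine-tuned with ordinary supervised cross-entropy:

```python
import torch.nn as nn
import torch.nn.functional as F

def build_classifier(pretrained_encoder: nn.Module, embed_dim: int, num_classes: int = 1000):
    # Assumes the SAMI pre-trained encoder returns a pooled (B, embed_dim) feature.
    return nn.Sequential(pretrained_encoder,
                         nn.LayerNorm(embed_dim),
                         nn.Linear(embed_dim, num_classes))

def finetune_step(model, images, labels, optimizer):
    # Standard supervised fine-tuning step on the target task (e.g. ImageNet-1K).
    logits = model(images)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```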

The researchers also fine-tuned the pre-trained models on object detection, instance segmentation and semantic segmentation. On all of these tasks, the method achieves better results than other pre-training baselines and, more importantly, achieves significant gains for small models.

Yunyang Xiong, an author of the paper, said that the proposed EfficientSAM has about 20 times fewer parameters and runs about 20 times faster, while its accuracy is within about 2 percentage points of the original SAM model, far better than MobileSAM/FastSAM.

In the demo, clicking on an animal in the picture lets EfficientSAM quickly segment the object.

EfficientSAM can also accurately identify and segment a person in a picture.

Trial address: https://ab348ea7942fe2af48.gradio.live/

Method

EfficientSAM consists of two stages: 1) pre-training SAMI on ImageNet (top); 2) fine-tuning SAM on SA-1B (bottom).

EfficientSAM mainly contains the following components:

Cross-attention decoder: Under the supervision of SAM features, only the masked tokens need to be reconstructed by the decoder, while the output of the encoder acts as anchors during reconstruction. In the cross-attention decoder, the queries come from the masked tokens, and the keys and values are derived from the unmasked features from the encoder together with the masked features. The output features of the masked tokens from the cross-attention decoder are merged with the output features of the unmasked tokens from the encoder to form the MAE output embedding, and these combined features are then reordered to the original positions of the input image tokens in the final MAE output.
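
A minimal sketch of such a cross-attention decoder is shown below; the dimensions, module names and merge step are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionDecoder(nn.Module):
    """Queries come only from mask tokens; keys/values come from encoder output tokens."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, mask_queries, encoder_tokens):
        # mask_queries:   (B, num_masked, dim)  learnable mask tokens + position info
        # encoder_tokens: (B, num_visible, dim) features of the unmasked tokens
        out, _ = self.attn(query=mask_queries, key=encoder_tokens, value=encoder_tokens)
        return self.norm(out + mask_queries)     # reconstructed masked-token features

def merge_and_reorder(decoded_masked, encoder_visible, masked_idx, visible_idx, seq_len, dim):
    # Scatter decoder outputs (masked positions) and encoder outputs (visible
    # positions) back to the original token order of the input image.
    B = decoded_masked.shape[0]
    full = torch.zeros(B, seq_len, dim, device=decoded_masked.device)
    full.scatter_(1, masked_idx.unsqueeze(-1).expand(-1, -1, dim), decoded_masked)
    full.scatter_(1, visible_idx.unsqueeze(-1).expand(-1, -1, dim), encoder_visible)
    return full                                   # (B, seq_len, dim) MAE output embedding
```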

Linear projection head. The image outputs obtained from the encoder and the cross-attention decoder are then fed into a small projection head to align them with the features of the SAM image encoder. For simplicity, this paper uses only a linear projection head to resolve the feature-dimension mismatch between the SAM image encoder and the MAE output.
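
In code, this head can be a single linear layer; the dimensions below (ViT-Small width 384, ViT-H width 1280) are assumptions used only for illustration.

```python
import torch.nn as nn

# Maps MAE output features to the SAM ViT-H feature dimension.
mae_dim, sam_dim = 384, 1280          # e.g. ViT-Small encoder -> SAM ViT-H features
projection_head = nn.Linear(mae_dim, sam_dim)
```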

Reconstruction loss. In each training iteration, SAMI includes a forward pass of the SAM image encoder and the forward and backward passes of the MAE. The outputs of the SAM image encoder and of the MAE linear projection head are compared to compute the reconstruction loss.
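
One training iteration can be sketched as follows; the function names are placeholders rather than the released API, and an MSE loss is used here as a common choice for feature reconstruction.

```python
import torch
import torch.nn.functional as F

def sami_step(image, sam_encoder, light_encoder, decoder, proj_head, optimizer):
    # Frozen SAM ViT-H encoder: forward pass only, no gradients.
    with torch.no_grad():
        target = sam_encoder(image)                # (B, N, sam_dim) SAM features

    # Lightweight MAE path: gradients flow through encoder, decoder and head.
    visible, masked_info = light_encoder(image)    # encode only the unmasked patches
    reconstructed = decoder(visible, masked_info)  # (B, N, mae_dim) full-length output
    pred = proj_head(reconstructed)                # align dims with SAM features

    loss = F.mse_loss(pred, target)                # reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```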

After pre-training, the encoder can extract feature representations for various visual tasks, and the decoder is discarded. In particular, to build an efficient SAM model for the segment anything task, this paper adopts SAMI pre-trained lightweight encoders (such as ViT-Tiny and ViT-Small) as the image encoder of EfficientSAM, together with SAM's default mask decoder, as shown in Figure 2 (bottom). The EfficientSAM models are then fine-tuned on the SA-1B dataset for the segment anything task.
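
Conceptually, assembling EfficientSAM amounts to swapping SAM's heavy ViT-H image encoder for the SAMI-pretrained lightweight ViT while keeping SAM's prompt encoder and mask decoder; the class below is an illustrative sketch, not the released code.

```python
import torch.nn as nn

class EfficientSAM(nn.Module):
    def __init__(self, sami_pretrained_vit, sam_prompt_encoder, sam_mask_decoder):
        super().__init__()
        self.image_encoder = sami_pretrained_vit   # e.g. SAMI ViT-Tiny or ViT-Small
        self.prompt_encoder = sam_prompt_encoder   # reused from SAM
        self.mask_decoder = sam_mask_decoder       # reused from SAM

    def forward(self, image, prompts):
        image_embeddings = self.image_encoder(image)
        sparse, dense = self.prompt_encoder(prompts)
        # Predict masks from image embeddings and prompt embeddings.
        return self.mask_decoder(image_embeddings, sparse, dense)
```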

Experiment

Image classification. To evaluate the effectiveness of the method on image classification, the researchers applied the SAMI idea to ViT models and compared their performance on ImageNet-1K.

As shown in Table 1, SAMI is compared with pre-training methods such as MAE, iBOT, CAE and BEiT, and distillation methods such as DeiT and SSTA.

SAMI-B achieves a top-1 accuracy of 84.8%, higher than the pre-training baselines MAE, DMAE, iBOT, CAE and BEiT. SAMI also shows large improvements over distillation methods such as DeiT and SSTA. For lightweight models such as ViT-Tiny and ViT-Small, SAMI shows significant gains compared with DeiT, SSTA, DMAE and MAE.

Object detection and instance segmentation. This paper also transfers the SAMI-pretrained ViT backbones to downstream object detection and instance segmentation and compares them with other pre-training baselines on the COCO dataset. As shown in Table 2, SAMI consistently outperforms the other baselines.

These experimental results show that the pre-trained detector backbones provided by SAMI are very effective and efficient for object detection and instance segmentation tasks.

Semantic segmentation. This paper further transfers the pre-trained backbones to semantic segmentation to evaluate their effectiveness. The results are shown in Table 3: Mask2former with a backbone pre-trained by SAMI on ImageNet-1K achieves better mIoU than with an MAE pre-trained backbone. These experimental results verify that the proposed technique generalizes well to various downstream tasks.

Table 4 compares EfficientSAM with SAM, MobileSAM, and SAM-MAE-Ti. On COCO, EfficientSAM-Ti outperforms MobileSAM, and EfficientSAM-Ti with SAMI pre-trained weights also performs better than with MAE pre-trained weights.

In addition, EfficientSAM-S is only 1.5 mIoU lower than SAM on COCO box prompts and 3.5 mIoU lower than SAM on LVIS box prompts, with 20 times fewer parameters. The paper also finds that EfficientSAM performs well under multiple clicks compared with MobileSAM and SAM-MAE-Ti.

Table 5 shows the AP, APS, APM and APL for zero-shot instance segmentation. The researchers compared EfficientSAM with MobileSAM and FastSAM: compared with FastSAM, EfficientSAM-S gains more than 6.5 AP on COCO and 7.8 AP on LVIS. EfficientSAM-Ti is also significantly better than FastSAM, by 4.1 AP on COCO and 5.3 AP on LVIS, while MobileSAM's margins over FastSAM are 3.6 AP on COCO and 5.5 AP on LVIS.

Moreover, EfficientSAM is much lighter than FastSAM: EfficientSAM-Ti has 9.8M parameters, while FastSAM has 68M.

Figures 3, 4, and 5 provide some qualitative results so that readers can have a complementary understanding of the instance segmentation capabilities of EfficientSAMs.

For more research details, please refer to the original paper.
