YoloCS: Effectively reduce the space complexity of feature maps
Paper address: YOLOCS: Object Detection based on Dense Channel Compression for Feature Spatial Solidification (arxiv.org)
01 Overview
In today's sharing, the researchers examined the correlation between channel features and convolution kernels during feature purification and gradient backpropagation, focusing on forward and backward propagation within the network. Based on this analysis, they proposed a feature spatial solidification method called dense channel compression. Building on its core concepts, two innovative modules for the backbone and head networks are introduced: Dense Channel Compression for Feature Spatial Solidification (DCFS) and the Asymmetric Multi-level Channel Compression Decoupled Head (ADH). When integrated into the YOLOv5 model, these two modules demonstrate outstanding performance, yielding an improved model known as YOLOCS.
Evaluated on the MS-COCO dataset, the APs of the large, medium, and small YOLOCS models are 50.1%, 47.6%, and 42.5%, respectively. While maintaining an inference speed similar to that of YOLOv5, the large, medium, and small YOLOCS models outperform the corresponding YOLOv5 models' AP by 1.1%, 2.3%, and 5.2%, respectively.
02 Background
In recent years, object detection has received widespread attention in the field of computer vision. Among existing approaches, detectors based on the Single Shot MultiBox Detector (SSD) and detectors based on convolutional neural networks (CNNs) are two of the most commonly used. However, because single-shot detectors suffer from lower accuracy and CNN-based detectors incur high computational complexity, finding an object detection technique that is both efficient and highly accurate has become one of the hot topics in current research.
Dense Channel Compression (DCC) is a compression technique for convolutional neural networks that spatially solidifies feature maps in order to compress network parameters and accelerate the network. DCC aims to improve computational efficiency by reducing the number of network parameters; specifically, it compresses the channels of a convolutional layer's output feature maps, which can be achieved by removing redundant or unnecessary channels, or by methods such as low-rank decomposition. Although DCC has proven effective in image classification tasks, its application to object detection has not been fully studied.
Therefore, an object detection technique based on dense channel compression is proposed, named YOLOCS (YOLO with Dense Channel Compression). YOLOCS combines DCC with the YOLO (You Only Look Once) algorithm to achieve efficient, high-precision detection. Specifically, YOLOCS uses DCC to spatially solidify feature maps, enabling precise localization of object positions, while leveraging the single-shot nature of the YOLO algorithm for fast object classification.
03 New Framework
- Dense Channel Compression for Feature Spatial Solidification Structure (DCFS)
In the proposed method (Figure (c) above), the researchers not only address the balance between network width and depth, but also compress features from layers of different depths with 3×3 convolutions, halving the number of channels before outputting and fusing the features. This allows feature outputs from different layers to be refined more thoroughly, enhancing feature diversity and effectiveness during the fusion stage.
In addition, the compressed features from each layer carry larger (3×3) convolution kernel weights, effectively expanding the receptive field of the output features. This approach is called dense channel compression for feature spatial solidification. Its rationale relies on using larger convolution kernels to perform channel compression, which has two key advantages: first, it expands the receptive field during forward propagation, ensuring that regionally correlated feature details are incorporated and feature loss is minimized throughout the compression stage; second, it enhances error details during backpropagation, allowing for more accurate weight adjustments.
To further illustrate these two advantages, convolutions with two different kernel sizes (1×1 and 3×3) are used to compress two channels, as shown below:
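The contrast between the two kernel sizes can be sketched in a few lines of PyTorch. This is a hypothetical illustration (not the paper's code): both convolutions compress two channels into one at the same resolution, but the 3×3 kernel mixes a spatial neighborhood of both channels at each output position, whereas the 1×1 kernel only mixes the two channels at a single position.

```python
import torch
import torch.nn as nn

# Compress 2 channels to 1 with a 1x1 vs. a 3x3 convolution.
x = torch.randn(1, 2, 8, 8)  # batch, channels, height, width

compress_1x1 = nn.Conv2d(2, 1, kernel_size=1, bias=False)
compress_3x3 = nn.Conv2d(2, 1, kernel_size=3, padding=1, bias=False)

y1 = compress_1x1(x)  # each output pixel mixes the 2 channels at one position
y3 = compress_3x3(x)  # each output pixel mixes a 3x3 neighborhood of both channels

print(y1.shape, y3.shape)  # both (1, 1, 8, 8): same resolution
print(sum(p.numel() for p in compress_1x1.parameters()))  # 2 weights
print(sum(p.numel() for p in compress_3x3.parameters()))  # 18 weights
```

The larger kernel costs more weights (18 vs. 2 here) but gives every compressed output value a wider receptive field, which is the trade-off the method exploits.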
The network structure of DCFS is shown in the figure below. A three-layer bottleneck structure is adopted to gradually compress channels during forward propagation. A half-channel 3×3 convolution is applied to all branches, followed by batch normalization (BN) and an activation layer. Finally, a 1×1 convolutional layer compresses the output feature channels to match the input feature channels.
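A minimal sketch of a DCFS-style block, based only on the textual description above: each 3×3 convolution (with BN and activation) compresses to half the channels, the compressed branch outputs are fused by concatenation, and a final 1×1 convolution restores the input channel count. The class names and exact wiring are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ConvBNAct(nn.Module):
    """3x3 convolution followed by BN and an activation layer."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class DCFSBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        c = channels
        # three bottleneck stages; each 3x3 conv outputs half the input channels
        self.stage1 = ConvBNAct(c, c // 2)
        self.stage2 = ConvBNAct(c // 2, c // 2)
        self.stage3 = ConvBNAct(c // 2, c // 2)
        # fuse the three half-channel branches, then 1x1 back to the input width
        self.fuse = nn.Conv2d(3 * (c // 2), c, kernel_size=1, bias=False)

    def forward(self, x):
        b1 = self.stage1(x)
        b2 = self.stage2(b1)
        b3 = self.stage3(b2)
        return self.fuse(torch.cat([b1, b2, b3], dim=1))

x = torch.randn(1, 64, 32, 32)
y = DCFSBlock(64)(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Note how every channel-compression step goes through a 3×3 kernel, so each halving of the channel count also widens the receptive field, which is the core of the feature-spatial-solidification idea.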
- Asymmetric Multi-level Channel Compression Decoupled Head (ADH)
To address the decoupled head problem in the YOLOX model, the researchers conducted a series of studies and experiments. The results reveal a logical correlation between the decoupled head structure and its associated loss functions: for different tasks, the structure of the decoupled head should be adjusted according to the complexity of the loss computation. Moreover, when the decoupled head is applied to various tasks, directly compressing the feature channels of the previous layer into task channels (as shown below) may cause significant feature loss due to the difference in final output dimensions, which in turn can adversely affect the model's overall performance.
Additionally, from the perspective of the proposed dense channel compression method for feature spatial solidification, directly reducing the number of channels in the final layer to match the output channels may cause feature loss during forward propagation, degrading network performance. Likewise, during backpropagation this structure may lead to suboptimal error propagation and hinder gradient stability. To address these challenges, a new decoupled head is introduced, called the Asymmetric Multi-level Channel Compression Decoupled Head (see Figure (b) below).
Specifically, the researchers deepened the network path dedicated to the objectness-scoring task, using three convolutions to expand both the receptive field and the parameter count for this task, while compressing the features of each convolutional layer along the channel dimension. This not only effectively alleviates the training difficulty of the objectness-scoring task and improves model performance, but also greatly reduces the parameters and GFLOPs of the decoupled head module. In contrast, a single convolutional layer is used for each of the classification and bounding-box regression tasks, because the losses associated with these tasks are relatively small for matched positive samples, so over-expanding those branches is avoided. Together these choices significantly reduce the parameters and GFLOPs of the decoupled head, ultimately increasing inference speed.
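The asymmetric layout described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the objectness branch is deepened to three 3×3 convolutions that progressively compress channels down to the single objectness channel, while classification and box regression each use one convolution; the intermediate channel counts are hypothetical.

```python
import torch
import torch.nn as nn

class ADHHead(nn.Module):
    """Sketch of an asymmetric multi-level channel compression decoupled head."""
    def __init__(self, c_in=256, num_classes=80):
        super().__init__()
        # deep objectness branch: three 3x3 convs progressively compress
        # channels (c -> c/2 -> c/4 -> 1), widening the receptive field
        self.obj_branch = nn.Sequential(
            nn.Conv2d(c_in, c_in // 2, 3, padding=1), nn.SiLU(),
            nn.Conv2d(c_in // 2, c_in // 4, 3, padding=1), nn.SiLU(),
            nn.Conv2d(c_in // 4, 1, 3, padding=1),
        )
        # shallow branches: a single conv each for classification and box regression
        self.cls_branch = nn.Conv2d(c_in, num_classes, 1)
        self.box_branch = nn.Conv2d(c_in, 4, 1)

    def forward(self, x):
        return self.obj_branch(x), self.cls_branch(x), self.box_branch(x)

x = torch.randn(1, 256, 20, 20)
obj, cls, box = ADHHead()(x)
print(obj.shape, cls.shape, box.shape)
```

The asymmetry is the point: the hard-to-learn objectness score gets depth and gradual channel compression, while the easier classification and regression outputs stay cheap, keeping the head's overall parameter count and GFLOPs low.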
04 Experiments
Ablation experiments on MS-COCO val2017
Comparison of YOLOCS, YOLOX, and YOLOv5-r6.1 [7] in terms of AP on MS-COCO 2017 test-dev