AIxiv is the column through which this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission emails: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
vHeat is a visual representation model based on heat conduction. It treats image feature patches as heat sources and extracts image features by predicting the thermal diffusivity and applying physical heat conduction principles. Compared with visual models based on the attention mechanism, vHeat combines O(N^1.5) computational complexity, a global receptive field, and physical interpretability. With high-resolution image input, the throughput, GPU memory usage, and FLOPs of the vHeat-base model are 3 times, 1/4, and 3/4 those of the Swin-base model, respectively. It achieves advanced performance on basic downstream tasks such as image classification, object detection, and semantic/instance segmentation.
The two most mainstream basic visual models today are the CNN and the Vision Transformer (ViT). However, CNN performance is limited by local receptive fields and fixed convolution kernels, while ViT can represent global dependencies only at the cost of quadratic computational complexity. We observe that the convolution operator of CNNs and the self-attention operator of ViTs are both forms of pixel-wise information transfer within features, which reminds us of heat conduction in physics. Based on the heat conduction equation, we connect the spatial propagation of visual semantics with physical heat conduction, propose a Heat Conduction Operator (HCO) with O(N^1.5) computational complexity, and then design vHeat, a visual representation model with low complexity, a global receptive field, and physical interpretability. The calculation forms and complexities of HCO and self-attention are compared in the figure below. Experiments show that vHeat performs well on various visual tasks. For example, vHeat-T achieves 82.2% classification accuracy on ImageNet-1K, 0.9% higher than Swin-T and 1.7% higher than ViM-S. Beyond accuracy, vHeat also offers high inference speed, low GPU memory usage, and low FLOPs: at high input resolution, the base-scale vHeat model has 3 times the throughput, 1/4 the GPU memory usage, and 3/4 the FLOPs of Swin.
Method introduction

Use $u(x,y,t)$ to represent the temperature of point $(x,y)$ at time $t$. The physical heat conduction equation is

$$\frac{\partial u}{\partial t} = k\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right),$$

where $k>0$ is the thermal diffusivity. Given the initial condition $u(x,y,0)$ at time $t=0$, the heat conduction equation can be solved with the Fourier transform, expressed as follows:

$$u(x,y,t) = \mathcal{F}^{-1}\left(\mathcal{F}\big(u(x,y,0)\big)\, e^{-k(\omega_x^2+\omega_y^2)t}\right),$$

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform respectively, and $(\omega_x,\omega_y)$ are the frequency-domain spatial coordinates.
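The Fourier-transform solution of the heat equation can be sketched numerically: transform the initial field, attenuate each frequency by its decay factor, and transform back. This is a minimal NumPy sketch; the function name and the discrete frequency grid are my assumptions, not the paper's implementation.

```python
import numpy as np

def heat_conduct_fft(u0, k=1.0, t=1.0):
    """One heat-conduction step u(x, y, 0) -> u(x, y, t) via the FFT.

    Implements u(t) = F^-1( F(u0) * exp(-k * (wx^2 + wy^2) * t) ).
    u0 : (H, W) initial temperature field; k > 0 is the thermal diffusivity.
    """
    H, W = u0.shape
    wy = 2 * np.pi * np.fft.fftfreq(H)        # omega_y for each row frequency
    wx = 2 * np.pi * np.fft.fftfreq(W)        # omega_x for each column frequency
    wx2, wy2 = np.meshgrid(wx ** 2, wy ** 2)  # (H, W) grids of squared frequencies
    decay = np.exp(-k * (wx2 + wy2) * t)      # each frequency decays independently
    return np.real(np.fft.ifft2(np.fft.fft2(u0) * decay))
```

Two properties make this a useful sanity check: the zero-frequency (mean) component has decay factor 1, so total "heat" is conserved, while every other frequency shrinks, so the field gets smoother as t grows.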
We use HCO to implement heat conduction on visual semantics. First, we extend $u(x,y,t)$ in the physical heat conduction equation to a multi-channel feature $U(x,y,t)$, and treat $U(x,y,0)$ as the input and $U(x,y,t)$ as the output. HCO simulates the general solution of heat conduction in a discretized form, as shown in the following formula:

$$U(x,y,t) = \mathrm{IDCT_{2D}}\left(\mathrm{DCT_{2D}}\big(U(x,y,0)\big)\, e^{-k(\omega_x^2+\omega_y^2)t}\right),$$

where $\mathrm{DCT_{2D}}$ and $\mathrm{IDCT_{2D}}$ denote the two-dimensional discrete cosine transform and its inverse. The structure of HCO is shown in Figure (a) below.
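The discretized HCO step can be sketched with SciPy's DCT. This is a simplified illustration, not vHeat's implementation: the function name and DCT frequency grid are assumptions, a single scalar k is used (in vHeat, k is predicted per frequency position), and SciPy's fast transform stands in for the matrix-multiplication form of DCT2D from which the O(N^1.5) complexity is derived.

```python
import numpy as np
from scipy.fft import dctn, idctn

def hco(U, k=1.0, t=1.0):
    """One Heat Conduction Operator (HCO) step on a multi-channel feature map.

    U : (C, H, W) feature map U(x, y, 0); each channel is conducted independently.
    Applies U^t = IDCT2D( DCT2D(U^0) * exp(-k * (wx^2 + wy^2) * t) ).
    The DCT keeps everything real-valued, unlike the complex-valued FFT.
    """
    C, H, W = U.shape
    # Frequency coordinates of the DCT-II basis cos(pi * (2n + 1) * m / (2N))
    wy = np.pi * np.arange(H) / H
    wx = np.pi * np.arange(W) / W
    decay = np.exp(-k * (wy[:, None] ** 2 + wx[None, :] ** 2) * t)  # (H, W)
    freq = dctn(U, axes=(-2, -1), norm="ortho")  # per-channel 2-D DCT
    return idctn(freq * decay, axes=(-2, -1), norm="ortho")
```

As with the continuous solution, the DC coefficient is untouched (its decay factor is 1), so each channel's mean is preserved while high-frequency content is smoothed away.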
In addition, we believe that different image contents should correspond to different thermal diffusivities. Since the output of $\mathrm{DCT_{2D}}$ lies in the frequency domain, we determine the thermal diffusivity from the frequency value. Because different positions in the frequency domain represent different frequency values, we propose Frequency Value Embeddings (FVEs) to represent frequency-value information, similar in implementation and function to the absolute position encoding in ViT, and use FVEs to predict the thermal diffusivity $k$, so that HCO can perform non-uniform, adaptive conduction, as shown in the figure below.
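One way FVEs could predict a per-position diffusivity is a learnable embedding per frequency position fed through a linear head, with a softplus to keep k positive as the heat equation requires. This is only a sketch under my own assumptions (shapes, names, the softplus choice, random initialization in place of learned weights); vHeat would learn the embeddings and head end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

class FVEPredictor:
    """Sketch: Frequency Value Embeddings (FVEs) -> per-position diffusivity k.

    One D-dim embedding per frequency position (analogous to ViT's absolute
    position encoding, but over the frequency domain), projected to a scalar.
    """
    def __init__(self, H, W, dim=16):
        self.fve = rng.normal(0, 0.02, size=(H, W, dim))  # learnable FVEs
        self.w = rng.normal(0, 0.02, size=(dim,))          # linear head weight
        self.b = 0.0                                       # linear head bias

    def predict_k(self):
        logits = self.fve @ self.w + self.b   # (H, W) raw scores
        return np.log1p(np.exp(logits))       # softplus keeps k > 0
```

The resulting (H, W) map of k values would replace the scalar k in the HCO decay term, letting each frequency component conduct at its own rate.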
vHeat is implemented using a multi-level structure, as shown in the figure below. The overall framework is similar to the mainstream visual model, and the HCO layer is shown in Figure 2 (b).
Experimental results
ImageNet classification
Downstream tasks
On the COCO dataset, vHeat also shows a performance advantage: with 12 epochs of fine-tuning, vHeat-T/S/B reach 45.1/46.8/47.7 mAP respectively, exceeding Swin-T/S/B by 2.4/2.0/0.8 mAP and ConvNeXt-T/S/B by 0.9/1.4/0.7 mAP. On the ADE20K dataset, vHeat-T/S/B reach 46.9/49.0/49.6 mIoU respectively, again outperforming Swin and ConvNeXt. These results verify that vHeat works well on downstream visual tasks, demonstrating its potential to replace mainstream basic visual models.
Analysis Experiment
Effective Receptive Field