This paper proposes the first general de-flickering method that requires no additional guidance or knowledge of the flicker type and can eliminate various flicker artifacts.
High-quality videos are usually temporally consistent, but many videos exhibit flickering for various reasons. For example, the brightness of old movies can be very unstable because old camera hardware was often of poor quality and could not keep the exposure time the same for every frame. In addition, high-speed cameras with very short exposure times can capture high-frequency (e.g., 60 Hz) changes in indoor lighting.
Flickering may also appear when per-frame image algorithms, such as image enhancement, image colorization, and style transfer, are applied to temporally consistent videos.
Videos produced by video generation methods may contain flicker artifacts as well.
Since temporally consistent videos are generally more visually appealing, removing flicker from video is of great interest in video processing and computational photography.
This CVPR 2023 paper is dedicated to a general flicker removal method that: (1) generalizes to various flicker patterns and levels (e.g., old movies, slow-motion videos captured by high-speed cameras), and (2) requires only a flickering video and no other auxiliary information (e.g., the flicker type or an additional temporally consistent video). Because the method makes few assumptions, it has a wide range of application scenarios.
Code link: https://github.com/ChenyangLEI/All-in-one-Deflicker
Project link: https://chenyanglei.github.io/deflicker
Paper link: https://arxiv.org/pdf/2303.08120.pdf
Building a general flicker removal method is challenging because it is difficult to enforce temporal consistency throughout a video without any additional guidance.
Existing techniques usually design specific strategies for each flicker type and rely on flicker-specific knowledge. For example, for slow-motion videos captured by high-speed cameras, previous work analyzes the lighting frequency. For videos processed by image algorithms, blind video temporal consistency methods use the temporally consistent unprocessed video as a reference to obtain long-term consistency. However, the flicker type or the unprocessed video is not always available, so these flicker-specific algorithms cannot be applied in such cases.
An intuitive solution is to use optical flow to track correspondences. However, optical flow estimated from flickering videos is not accurate enough, and the accumulated error of chained optical flow also grows with the number of frames.
Through two key observations and designs, the authors propose a general de-flickering method that eliminates various flickering artifacts without additional guidance.
First, a good blind de-flickering model should be able to track correspondences across all video frames. Most network architectures used in video processing can only take a small number of frames as input, which leads to a small temporal receptive field and cannot guarantee long-term consistency. The researchers observed that neural atlases are well suited to the flicker elimination task and therefore introduce them here. A neural atlas is a unified, concise representation of all pixels in a video. As shown in Figure (a), each pixel p is fed into a mapping network M, which predicts 2D coordinates (u_p, v_p) indicating the pixel's corresponding position in the atlas. Ideally, corresponding points in different frames should share the same position in the atlas, even if the input pixels have different colors, which ensures temporal consistency.
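To make the idea concrete, here is a minimal PyTorch sketch of the atlas representation. It is not the authors' implementation; the network sizes, the tanh normalization of the atlas coordinates, and the absence of positional encoding are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    # small fully connected network used for both the mapping and the atlas
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class NeuralAtlas(nn.Module):
    def __init__(self):
        super().__init__()
        self.mapping = mlp(3, 2)  # M: pixel (x, y, t) -> atlas coords (u, v)
        self.atlas = mlp(2, 3)    # A: atlas coords (u, v) -> shared RGB color

    def forward(self, xyt):
        uv = torch.tanh(self.mapping(xyt))  # keep atlas coordinates in [-1, 1]
        return self.atlas(uv), uv

model = NeuralAtlas()
p = torch.tensor([[0.1, -0.3, 0.5]])  # one pixel p = (x, y, t), normalized
rgb, uv = model(p)
# Corresponding points in different frames should map to (nearly) the same
# (u, v) and therefore read the same atlas color, giving temporal consistency.
```

In practice, atlas representations of this kind are fitted per video with reconstruction and flow-consistency losses, and the inputs are usually lifted with positional encoding before the MLPs.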
Second, although the frames reconstructed from the shared atlas are temporally consistent, their image structure is flawed: the neural atlas cannot easily model dynamic objects with large motion, and optical flow is not perfect either. The authors therefore propose a neural filtering strategy to pick out the good parts of the flawed atlas. They train a neural network to be invariant to two types of distortion, which simulate atlas artifacts and video flicker respectively. At test time, the network works well as a filter, preserving the consistency of the atlas while blocking its artifacts.
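The following is a rough PyTorch sketch of this training idea. The specific distortions (a global color jitter standing in for flicker and a blur standing in for atlas artifacts), the tiny convolutional network, and the random tensors used in place of real training images are illustrative assumptions, not the paper's actual choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simulate_flicker(img):
    # global per-image brightness / color jitter, mimicking flicker
    gain = 0.6 + 0.8 * torch.rand(img.shape[0], 3, 1, 1)
    return (img * gain).clamp(0, 1)

def simulate_atlas_artifacts(img):
    # crude structural distortion: downsample then upsample to wash out details
    small = F.interpolate(img, scale_factor=0.25, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=img.shape[-2:], mode="bilinear", align_corners=False)

class FilterNet(nn.Module):
    # takes the flickery-but-sharp frame and the consistent-but-flawed frame
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, flickery, atlas_like):
        return self.net(torch.cat([flickery, atlas_like], dim=1))

net = FilterNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

clean = torch.rand(4, 3, 128, 128)  # stand-in for a batch of clean images
opt.zero_grad()
pred = net(simulate_flicker(clean), simulate_atlas_artifacts(clean))
loss = F.l1_loss(pred, clean)  # learn invariance to both distortions
loss.backward()
opt.step()
```

The key design choice is that the network never needs to know which kind of flicker it faces: it only learns which input to trust for structure and which to trust for color consistency.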
The researchers constructed a dataset containing various real flickering videos. Extensive experiments show that the method achieves satisfactory de-flickering results on multiple types of flickering videos, and it even outperforms baseline methods that use additional guidance on a public benchmark.
The researchers provide (a) a quantitative comparison on processed videos and synthesized flickering videos. The warping error of their method is much smaller than that of the baselines, and in terms of PSNR their results are also closer to the ground truth on synthetic data. For other real-world videos, they ran (b) a double-blind user study, and most users preferred their results.
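For reference, temporal consistency is commonly measured with a warping-error style metric: one output frame is warped to the next using optical flow and the remaining mismatch is measured. Below is a simplified sketch; occlusion masking is omitted and the dense flow field is assumed to come from an off-the-shelf estimator such as RAFT.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # img: (N, 3, H, W); flow: (N, 2, H, W) in pixels, mapping frame t+1 -> t
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(img)
    grid = base + flow
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0  # normalize to [-1, 1] for grid_sample
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def warping_error(frame_t, frame_t1, flow_t1_to_t):
    # warp the output at frame t onto frame t+1 and measure the mismatch;
    # lower values mean the output video is more temporally consistent
    warped = backward_warp(frame_t, flow_t1_to_t)
    return F.mse_loss(warped, frame_t1)

# toy usage with random tensors standing in for two consecutive output frames
a, b = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
flow = torch.zeros(1, 2, 256, 256)  # placeholder flow field
print(warping_error(a, b, flow).item())
```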
As shown in the figure above, the algorithm removes flicker from the input video very well. Note that the third column shows the frames reconstructed from the neural atlas: obvious defects can be observed, yet the algorithm exploits their consistency without introducing those defects into the final result.
This framework can remove different categories of flicker, including flicker in old movies and in AI-generated videos.