Stable Diffusion has recently become a hot research direction. A blogger named Matthias Bühlmann experimented with the model's capabilities and found that Stable Diffusion works as a very powerful lossy image compression codec. He described the experiment and his analysis in a blog post, which the following text is based on.
First, Matthias Bühlmann presents the compression results of Stable Diffusion, JPG, and WebP at high compression ratios. All results are at a resolution of 512x512 pixels:
San Francisco landscape, from left to right: JPG (6.16 kB), WebP (6.80 kB), Stable Diffusion (4.96 kB).
Candy shop, from left to right: JPG (5.68 kB), WebP (5.71 kB), Stable Diffusion (4.98 kB).
Animal photo, from left to right: JPG (5.66 kB), WebP (6.74 kB), Stable Diffusion (4.97 kB).
These examples clearly show that, compared with JPG and WebP, compressing images with Stable Diffusion preserves better image quality at smaller file sizes.
Exploration experiment

Matthias Bühlmann then analyzed how this works. Stable Diffusion uses three artificial neural networks trained in series: a Variational Autoencoder (VAE), a U-Net, and a text encoder.
The VAE encodes and decodes images between image space and a latent-space representation. The latent representation of the source image (512 x 512, 3x8 or 4x8 bit) has a lower resolution (64 x 64) but higher precision (4x32 bit).
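Below is a minimal sketch (not the author's original code) of such a VAE encoding/decoding roundtrip using the Hugging Face diffusers library; the checkpoint name and the input file are illustrative assumptions.

```python
import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

# Load only the VAE component of Stable Diffusion v1.4 (assumed checkpoint name).
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

img = Image.open("photo.png").convert("RGB").resize((512, 512))   # hypothetical input file
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0         # scale pixels to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                                # (1, 3, 512, 512)

with torch.no_grad():
    latents = vae.encode(x).latent_dist.mean    # (1, 4, 64, 64) float32 latent representation
    recon = vae.decode(latents).sample          # roundtrip back to (1, 3, 512, 512) image space
```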
The VAE is learned during training, so the latent representations of different model versions may look different. For example, the latent representation from Stable Diffusion v1.4 looks as follows (remapped to a 4-channel color image):
When the latent features are rescaled and interpreted as color values (with an alpha channel), the main features of the image are still visible, though the VAE also encodes higher-resolution features into these pixel values.
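As an illustration of this remapping (an assumption about how such a visualization could be produced, not the author's code), the 4-channel latents from the sketch above can be written out as an RGBA image:

```python
# Reuses the `latents` tensor from the previous sketch.
lat = latents[0].permute(1, 2, 0).numpy()            # (64, 64, 4)
lat = (lat - lat.min()) / (lat.max() - lat.min())     # normalize values to [0, 1]
Image.fromarray((lat * 255).astype(np.uint8), mode="RGBA").save("latents_rgba.png")
```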
For example, a VAE encoding/decoding roundtrip gives the following result:
It is worth noting that this roundtrip is not lossless. For example, the white lettering on the blue tape in the image is slightly less readable after decoding; the VAE of the Stable Diffusion v1.4 model is generally not very good at representing small text and faces.
The main purpose of Stable Diffusion is to generate images from text descriptions, which requires the model to operate on this latent-space representation of the image. The model uses a trained U-Net to iteratively denoise the latent image, outputting what it "sees" (predicts) in the noise, much as we sometimes see shapes or faces in clouds. During the iterative denoising steps, a third ML model, the text encoder, guides the U-Net toward seeing different things.
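The following is a hedged sketch of that denoising loop, assuming `unet`, `text_encoder`, `tokenizer`, and `scheduler` have been loaded from a Stable Diffusion v1.4 checkpoint with the diffusers library; it illustrates the idea rather than reproducing the author's code.

```python
# Conditioning: the text encoder turns the prompt into embeddings that guide the U-Net.
text_inputs = tokenizer("a photo of a cat", padding="max_length",
                        max_length=tokenizer.model_max_length, return_tensors="pt")
prompt_embeds = text_encoder(text_inputs.input_ids)[0]

latents = torch.randn(1, 4, 64, 64)     # start from pure latent noise
scheduler.set_timesteps(50)

for t in scheduler.timesteps:
    with torch.no_grad():
        # The U-Net predicts the noise it "sees" in the current latents, guided by the text.
        noise_pred = unet(latents, t, encoder_hidden_states=prompt_embeds).sample
    # The scheduler removes a portion of the predicted noise before the next iteration.
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```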
Matthias Bühlmann then analyzed how the latent representation produced by the VAE can be compressed effectively. He found that downsampling the latents or applying existing lossy image compression methods to them significantly degrades the quality of the reconstructed image, whereas the VAE decoding process appears quite robust to reduced precision of the latent values.
Matthias Bühlmann therefore quantized the latent representation from 32-bit floating point to 8-bit unsigned integers and found only very small reconstruction errors. In the figure below, left: reconstruction from the 32-bit floating-point latent representation; middle: ground truth; right: reconstruction from the 8-bit integer latent representation.
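A minimal sketch of this quantization step (an assumption about how it could be done, not the author's exact procedure):

```python
import numpy as np

def quantize_latents(latents: np.ndarray):
    """Map float32 latents to uint8, returning the value range needed to invert the mapping."""
    lo, hi = float(latents.min()), float(latents.max())
    q = np.round((latents - lo) / (hi - lo) * 255).astype(np.uint8)
    return q, lo, hi

def dequantize_latents(q: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Approximately recover float32 latents from the 8-bit representation."""
    return q.astype(np.float32) / 255 * (hi - lo) + lo
```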
He also found that quantizing further, using a palette and a dithering algorithm, gave unexpectedly good results. However, decoding the palettized representation directly with the VAE leads to some visible artifacts:
Left: 32-bit latent representation; Middle: 8-bit quantized latent representation; Right: palettized 8-bit latent representation with Floyd-Steinberg dither
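A sketch of how the 8-bit latents could be palettized with Floyd-Steinberg error diffusion follows; the 256-entry palette size matches the blog, but the k-means palette and the channel-wise error diffusion shown here are illustrative assumptions rather than the author's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def palettize_fs(latents_u8: np.ndarray, n_colors: int = 256):
    """latents_u8: (H, W, C) uint8 latent image. Returns palette indices and the palette."""
    h, w, c = latents_u8.shape
    palette = KMeans(n_clusters=n_colors, n_init=4).fit(
        latents_u8.reshape(-1, c).astype(np.float32)).cluster_centers_

    work = latents_u8.astype(np.float32).copy()
    indices = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = int(np.argmin(((palette - old) ** 2).sum(axis=1)))
            indices[y, x] = idx
            err = old - palette[idx]
            # Floyd-Steinberg: diffuse the quantization error to neighbouring pixels.
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return indices, palette
```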
The palettized representation with Floyd-Steinberg dithering introduces noise that distorts the decoded result, so Matthias Bühlmann used the U-Net to remove the noise caused by the dithering. After 4 iterations, the reconstructed result is visually very close to the unquantized version:
Reconstructed results (left: palettized representation with Floyd-Steinberg dithering; middle: after four denoising iterations; right: ground truth).
While the results are very good, this process also introduces some artifacts, such as the glossy shadow on the symbol at the center of the image above.
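A hedged sketch of this cleanup pass follows, reusing names from the earlier sketches (`unet`, `scheduler`, `vae`, `tokenizer`, `text_encoder`, `dequantize_latents`, `indices`, `palette`, `lo`, `hi`); treating the dithering noise as residual noise at the last few scheduler timesteps is an assumption about how such a pass could be implemented, not the author's exact code.

```python
# Rebuild float latents from the palettized representation, then run a few
# U-Net denoising steps to remove the dithering noise.
lat = dequantize_latents(palette[indices], lo, hi)             # (64, 64, 4) float32
latents = torch.from_numpy(lat).permute(2, 0, 1).unsqueeze(0)  # (1, 4, 64, 64)

# Unconditional guidance: an empty prompt, since no text description is stored.
empty = tokenizer("", padding="max_length",
                  max_length=tokenizer.model_max_length, return_tensors="pt")
uncond_embeds = text_encoder(empty.input_ids)[0]

scheduler.set_timesteps(50)
for t in scheduler.timesteps[-4:]:                             # only the last (low-noise) steps
    with torch.no_grad():
        noise_pred = unet(latents, t, encoder_hidden_states=uncond_embeds).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample

with torch.no_grad():
    image = vae.decode(latents).sample                         # final reconstructed image
```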
Although the images compressed with Stable Diffusion look subjectively much better than JPG and WebP, measured by metrics such as PSNR and SSIM, Stable Diffusion has no obvious advantage.
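For reference, PSNR and SSIM between a reconstruction and the ground truth can be computed with scikit-image; this is a generic sketch, not the blog's evaluation script.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(ground_truth: np.ndarray, reconstructed: np.ndarray):
    """Both images as (H, W, 3) uint8 arrays of the same size."""
    psnr = peak_signal_noise_ratio(ground_truth, reconstructed)
    ssim = structural_similarity(ground_truth, reconstructed, channel_axis=-1)
    return psnr, ssim
```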
As shown in the figure below, while Stable Diffusion as a codec is much better than the other methods at preserving image granularity, its compression artifacts are of a different kind: features such as the shapes of objects in the image may change.
Left: JPG compression; middle: Ground Truth; right: Stable Diffusion compression.
It is also worth noting that the current Stable Diffusion v1.4 model does not preserve small text or facial features well during compression, although the Stable Diffusion v1.5 model has improved face generation.
Left: ground truth; middle: after a VAE roundtrip (32-bit latent features); right: decoded from the palettized and denoised 8-bit latent features.
After the blog post was published, Matthias Bühlmann's experimental analysis sparked plenty of discussion.
Matthias Bühlmann himself believes that Stable Diffusion's image compression results are better than expected, and that the U-Net appears to effectively remove the noise introduced by dithering. However, future versions of the Stable Diffusion model may no longer have this image compression property.
Some netizens objected, however: "the VAE itself is already used for image compression." For example, the Transformer-based image compression method TIC uses a VAE architecture, so Matthias Bühlmann's experiment seems like overkill.
What do you think of this?