Smaller files, higher quality, can the popular Stable Diffusion compress images?

Recently, Stable Diffusion has become a popular research topic. A blogger named Matthias Bühlmann experimentally explored the capabilities of the model and found that Stable Diffusion makes a remarkably powerful lossy image compression codec. He wrote a blog post describing this experiment; the following is based on the original post.

First, Matthias Bühlmann compares the compression results of Stable Diffusion, JPG, and WebP at high compression factors. All results are at a resolution of 512×512 pixels:

San Francisco landscape, from left to right: JPG (6.16 kB), WebP (6.80 kB), Stable Diffusion (4.96 kB).

Candy shop, from left to right: JPG (5.68 kB), WebP (5.71 kB), Stable Diffusion (4.98 kB).

Animal photo, from left to right: JPG (5.66 kB), WebP (6.74 kB), Stable Diffusion (4.97 kB).

These examples clearly show that, compared with JPG and WebP, compressing images with Stable Diffusion preserves better image quality at smaller file sizes.

The Experiment

Matthias Bühlmann then analyzed how the method works. Stable Diffusion uses three artificial neural networks trained in series:

  • Variational Auto Encoder (VAE)
  • U-Net
  • Text Encoder

The VAE encodes and decodes images between pixel space and a latent space representation. The latent space representation of the source image (512×512 at 3×8 or 4×8 bit) has a lower resolution (64×64) but higher precision (4×32 bit).
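
To make the shapes concrete, here is a minimal sketch of this encoding step, assuming the Hugging Face diffusers library and the CompVis/stable-diffusion-v1-4 weights (the original post does not specify its tooling, so treat the API calls and file names here as illustrative):

```python
# A minimal sketch of encoding an image into Stable Diffusion's latent space.
# Assumes the Hugging Face diffusers library and the CompVis/stable-diffusion-v1-4
# weights; the original post does not specify its tooling.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

img = Image.open("input.png").convert("RGB").resize((512, 512))
x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0        # scale pixels to [-1, 1]
x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)      # (1, 3, 512, 512)

with torch.no_grad():
    # Take the mean of the encoder's distribution for a deterministic encoding.
    latents = vae.encode(x).latent_dist.mean               # (1, 4, 64, 64), float32

print(x.shape)        # 512*512*3 = 786,432 values at 8 bit in the source image (~768 kB)
print(latents.shape)  # 64*64*4   =  16,384 values at 32 bit in the latent       (~64 kB)
```

At 32-bit floating point the 64×64×4 latent is about 64 kB, compared with roughly 768 kB for the raw 512×512 RGB image, so further quantization is needed to reach the ~5 kB range quoted above.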

The VAE learns this encoding on its own during training, so the latent space representations of different model versions may look different. The latent space representation of Stable Diffusion v1.4 looks as follows (remapped to a 4-channel color image):

[Image: Stable Diffusion v1.4 latent space representation, remapped to a 4-channel color image]

When this latent representation is rescaled and interpreted as color values (with an alpha channel), the main features of the image are still visible, but the VAE also encodes higher-resolution features into these pixel values.

For example, a VAE encode/decode roundtrip gives the following result:

[Image: result of a VAE encode/decode roundtrip]

It is worth noting that this roundtrip is not lossless. For example, the white text on the blue tape in the image is slightly less readable after decoding; the VAE of the Stable Diffusion v1.4 model is generally not very good at representing small text and faces.
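
Continuing the sketch above, the decode half of the roundtrip could look roughly like this (again an illustration of the assumed diffusers API, not the author's exact code):

```python
# Continuing the sketch above: decode the latents back to pixel space with the same VAE.
with torch.no_grad():
    recon = vae.decode(latents).sample                      # (1, 3, 512, 512), roughly in [-1, 1]

recon = ((recon.clamp(-1, 1) + 1) / 2 * 255).round().byte() # back to 8-bit pixels
recon_img = Image.fromarray(recon.squeeze(0).permute(1, 2, 0).numpy())
recon_img.save("roundtrip.png")                             # small text and faces come back slightly degraded
```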

The main purpose of Stable Diffusion is to generate images from text descriptions, which requires the model to operate on the latent space representation of an image. The model uses a trained U-Net to iteratively denoise a latent space image, predicting what it "sees" in the noise, similar to how we sometimes see shapes or faces in clouds. During the iterative denoising steps, a third ML model, the text encoder, guides the U-Net toward what it should try to see.

Matthias Bühlmann then analyzed how the latent representation produced by the VAE could be compressed effectively. He found that downsampling the latent representation, or applying existing lossy image compression methods to it, significantly degrades the quality of the reconstructed image, while the VAE decoding process appears to be fairly robust to other degradations of the latent representation.

Matthias Bühlmann quantized the latent representation from floating point to 8-bit unsigned integers and found only very small reconstruction errors. As shown in the figure below, left: latent representation in 32-bit floating point; middle: ground truth; right: latent representation quantized to 8-bit integers.
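
The post does not give the exact scaling used, but a minimal sketch of such an 8-bit quantization, assuming a simple per-tensor min/max mapping, might look like this:

```python
import numpy as np

def quantize_latents(latents_f32):
    """Map float32 latents to uint8 with a per-tensor linear scale.
    Returns the quantized array plus the (lo, hi) range needed to invert it."""
    lo, hi = float(latents_f32.min()), float(latents_f32.max())
    q = np.round((latents_f32 - lo) / (hi - lo) * 255).astype(np.uint8)
    return q, (lo, hi)

def dequantize_latents(q, lo, hi):
    return q.astype(np.float32) / 255 * (hi - lo) + lo

# `latents` is the (1, 4, 64, 64) tensor from the earlier encoding sketch.
lat = latents.squeeze(0).permute(1, 2, 0).numpy()            # (64, 64, 4) float32
q, (lo, hi) = quantize_latents(lat)                          # 64*64*4 = 16,384 bytes
lat_back = dequantize_latents(q, lo, hi)                     # feed back through vae.decode to check the error
```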

He also found that quantizing further with a palette and a dithering algorithm gives unexpectedly good results. However, when decoded directly with the VAE, the palettized representation leads to some visible artifacts:

Left: 32-bit latent representation; middle: 8-bit quantized latent representation; right: palettized 8-bit latent representation with Floyd-Steinberg dithering.
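
The palettization step is not spelled out in the post either. Below is a hedged sketch of one way to do it: building a 256-entry palette with k-means (an assumption; the author may have used a different palette algorithm) and applying Floyd-Steinberg error diffusion over the 4-channel quantized latent:

```python
# A hedged sketch of palettizing the quantized latent with Floyd-Steinberg dithering.
# The palette-building method (k-means here) is an assumption; the post does not say
# which algorithm was used.
import numpy as np
from sklearn.cluster import KMeans

def palettize_floyd_steinberg(latent_u8, n_colors=256):
    """Palettize an (H, W, C) uint8 latent; returns an (H, W) index map and the palette."""
    h, w, c = latent_u8.shape
    pixels = latent_u8.reshape(-1, c).astype(np.float32)
    palette = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels).cluster_centers_

    work = latent_u8.astype(np.float32).copy()
    indices = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = work[y, x].copy()
            idx = int(np.argmin(((palette - old) ** 2).sum(axis=1)))
            indices[y, x] = idx
            err = old - palette[idx]
            # Push the quantization error onto neighbouring pixels (Floyd-Steinberg weights).
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return indices, palette

# `q` is the (64, 64, 4) uint8 latent from the quantization sketch above.
indices, palette = palettize_floyd_steinberg(q)
dithered = palette[indices]        # (64, 64, 4) palette values, ready to dequantize and decode
```

Stored this way, a 64×64 index map (4,096 bytes) plus a 256-entry, 4-channel palette (1,024 bytes) comes to roughly 5 kB, which is in the same ballpark as the file sizes quoted at the top of the article.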

The palettized representation with Floyd-Steinberg dithering introduces noise that distorts the decoded result, so Matthias Bühlmann used the U-Net to remove the noise introduced by dithering. After 4 iterations, the reconstructed result is visually very close to the unquantized version:

Reconstructed results (left: palettized representation with Floyd-Steinberg dithering; middle: after four denoising iterations; right: ground truth).

Although the results are very good, this process also introduces some artifacts, such as the glossy sheen on the sign in the center of the image above.
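
The post does not describe exactly how this U-Net pass is configured. The sketch below shows one plausible setup, running a few low-noise DDIM steps with an empty text prompt over the dithered latents; the scheduler, the timestep choice, and the empty prompt are all assumptions:

```python
# A hedged sketch of using the U-Net to clean up dithering noise.
# The scheduler settings, the choice of timesteps, and the empty prompt are
# assumptions; the post only says a few denoising iterations were run.
import torch
from diffusers import UNet2DConditionModel, DDIMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
# Standard Stable Diffusion v1 noise schedule.
scheduler = DDIMScheduler(num_train_timesteps=1000, beta_start=0.00085, beta_end=0.012,
                          beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)

@torch.no_grad()
def denoise_dithered_latents(latents, num_steps=4):
    """latents: a (1, 4, 64, 64) float tensor rebuilt from the palettized representation."""
    # Empty-prompt embedding, so the U-Net is not steered toward new content.
    tokens = tokenizer([""], padding="max_length",
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    text_emb = text_encoder(tokens.input_ids)[0]

    latents = latents * 0.18215                     # scale factor the U-Net was trained with
    scheduler.set_timesteps(50)
    # Run only the last (lowest-noise) steps, treating dither noise as mild diffusion noise.
    for t in scheduler.timesteps[-num_steps:]:
        noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents / 0.18215                        # undo the scaling before VAE decoding
```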

Although the results of Stable Diffusion compression are subjectively much better than JPG and WebP, in terms of metrics such as PSNR and SSIM, Stable Diffusion shows no obvious advantage.

As shown in the figure below, although Stable Diffusion as a codec is much better than the other methods at preserving image grain, its compression artifacts can alter characteristics such as the shapes of objects in the image.

Left: JPG compression; middle: Ground Truth; right: Stable Diffusion compression.
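
For reference, a metric comparison of this kind can be computed with scikit-image (the post does not say which implementation was used; the file names below are placeholders):

```python
# A short sketch of the PSNR/SSIM comparison, assuming a recent scikit-image;
# the file names are placeholders.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.asarray(Image.open("ground_truth.png").convert("RGB"))
candidates = {
    "Stable Diffusion": "sd_compressed.png",
    "JPG": "jpg_compressed.png",
    "WebP": "webp_compressed.png",
}

for name, path in candidates.items():
    img = np.asarray(Image.open(path).convert("RGB"))
    psnr = peak_signal_noise_ratio(gt, img, data_range=255)
    ssim = structural_similarity(gt, img, channel_axis=-1, data_range=255)
    print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```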

It is worth noting that the current Stable Diffusion v1.4 model does not preserve small text or facial features well during compression, but the Stable Diffusion v1.5 model has improved at generating faces.

Left: ground truth; middle: after a VAE roundtrip (32-bit latent features); right: result of decoding from the palettized and denoised 8-bit latent features.

After the blog post was published, Matthias Bühlmann's experimental analysis sparked a broad discussion.

Matthias Bühlmann himself believes that Stable Diffusion's image compression works better than expected, and that the U-Net seems able to effectively remove the noise introduced by dithering. However, future versions of the Stable Diffusion model may no longer have this image compression property.

However, some netizens questioned the approach: "a VAE by itself is already used for image compression." For example, the Transformer-based image compression method TIC uses a VAE architecture, so Matthias Bühlmann's experiment seems like overkill.

What do you think of this?
