
Efficiency crushes DALL·E 2 and Imagen, Google's new model achieves new SOTA, and can also handle PS in one sentence

Author: Wang Lin
Published: 2023-04-11 13:49:03

At the beginning of the new year, Google AI has begun to work on text-image generation models again.

This time, their new model Muse reached a new SOTA (the current state of the art) on the CC3M dataset.

And its efficiency far exceeds that of the globally popular DALL·E 2 and Imagen (both of which are diffusion models), as well as Parti (which is an autoregressive model).

The generation time for a single 512×512 image has been compressed to just 1.3 seconds.


In terms of image editing, you can edit an original image with just a text command.

(Looks like you no longer have to worry about learning PS~)


If you want a more precise effect, you can also select a mask region and edit that specific area, for example replacing the buildings in the background with hot air balloons.


Once Muse was officially announced, it quickly attracted a lot of attention. The original post has already received 4,000 likes.


Seeing another masterpiece from Google, some people have even begun to predict:

The competition among AI developers is fierce right now. It seems 2023 is going to be a really exciting year.


More efficient than DALL·E 2 and Imagen

Let's take a closer look at Muse, Google's newly released model.

First of all, in terms of the quality of the generated images, most of Muse’s works have clear images and natural effects.

Let’s take a look at more examples to get a feel for it~

For example, a sloth baby wearing a woolen hat is operating a computer; another example is a sheep in a wine glass:


Various subjects that are usually out of reach coexist harmoniously in one picture without any sense of dissonance.

If you think these can only be regarded as the basic operations of AIGC, then you might as well take a look at the editing function of Muse.

For example, one-click outfit change (you can also change gender):


This does not require any masking and can be done in one sentence.

And if you use a mask, you can perform six more operations, including switching the background with one click: from the original location to New York, to Paris, and then to San Francisco.


You can also go from the seaside to London, to a sea of flowers, or even fly to the rings of Saturn to pull off an exciting skateboard dolphin jump.


(Good grief: not only can you travel anywhere in an instant, you can also fly into the sky with one click...)

The results are truly impressive. So what technology is behind Muse, and why is it more efficient than DALL·E 2 and Imagen?

One important reason is that DALL·E 2 and Imagen must store all learned knowledge in their model parameters during training.

As a result, they require ever larger models and ever more training data to acquire more knowledge, tying "better" to "bigger".

The cost is a huge parameter count, and efficiency suffers accordingly.

According to the Google AI team, the main method they use is called: Masked image modeling.

This is an emerging self-supervised pre-training method. Its basic idea is simple:

parts of the input image are randomly masked out, and the model is trained to reconstruct them.

Muse operates on discrete image tokens under a spatial masking scheme: conditioned on text embeddings extracted from a pre-trained large language model, it is trained to predict the randomly masked image tokens.
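As a toy sketch of this objective (my own illustration, not Google's code; the 16×16 grid, 1024-entry codebook, and 0.5 mask ratio are assumed values), the masking step over discrete image tokens could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(tokens, mask_ratio, mask_id, rng):
    """Randomly replace a fraction of discrete image tokens with [MASK].

    Training would then ask the model to predict the original ids at the
    masked positions, conditioned on text embeddings.
    """
    tokens = tokens.copy()
    n_mask = int(round(mask_ratio * tokens.size))
    idx = rng.choice(tokens.size, size=n_mask, replace=False)
    targets = tokens.flat[idx].copy()   # ground truth for the loss
    tokens.flat[idx] = mask_id
    return tokens, idx, targets

# A 16x16 grid of token ids from a hypothetical 1024-entry codebook;
# the id 1024 itself serves as the [MASK] token.
grid = rng.integers(0, 1024, size=(16, 16))
masked, idx, targets = mask_tokens(grid, mask_ratio=0.5, mask_id=1024, rng=rng)
print(int((masked == 1024).sum()))  # 128 of the 256 positions are masked
```

In Muse the prediction over this masked grid is made by a transformer; at inference the process runs in reverse, starting from an all-masked grid.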


(Architecture, from top to bottom: pre-trained text encoder, base model, super-resolution model)

The Google team found that using a pre-trained large language model makes the AI's understanding of language more detailed and thorough.

In terms of output, because the model has a good grasp of objects' spatial relationships, poses, and other attributes, the generated images can be high-fidelity.

Compared with pixel space diffusion models such as DALL·E 2 and Imagen, Muse uses discrete tokens and has fewer sampling iterations.

In addition, compared with autoregressive models such as Parti, Muse uses parallel decoding, which is more efficient.
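The difference can be sketched as follows: an autoregressive decoder such as Parti emits one token per forward pass, while a Muse-style decoder starts from a fully masked grid and commits a batch of its most confident predictions each pass. This toy version uses random stand-in "confidences" and a cosine schedule; the 1024-token grid and 24 steps are illustrative assumptions, not the paper's settings:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def parallel_decode(n_tokens, n_steps, rng):
    """Confidence-based parallel decoding (toy): each pass proposes every
    remaining token at once, then commits the most confident ones so that a
    cosine schedule of tokens stays masked for the next pass."""
    unknown = np.ones(n_tokens, dtype=bool)
    passes = 0
    for t in range(1, n_steps + 1):
        # Fraction of tokens that should still be masked after this pass.
        frac = math.cos(math.pi / 2 * t / n_steps)
        n_keep_masked = int(round(frac * n_tokens))
        # Toy stand-in for model confidence at each unknown position.
        conf = rng.random(n_tokens)
        conf[~unknown] = -1.0  # never re-commit finished tokens
        n_commit = int(unknown.sum()) - n_keep_masked
        commit = np.argsort(conf)[::-1][:n_commit]
        unknown[commit] = False
        passes += 1
    return passes, int(unknown.sum())

passes, remaining = parallel_decode(n_tokens=32 * 32, n_steps=24, rng=rng)
print(passes, remaining)  # 24 passes instead of 1024; nothing left masked
```

An autoregressive model would need 1024 sequential passes for the same grid; here every token is resolved in 24, which is the source of the claimed speed-up over autoregressive sampling.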

SOTA FID scores

As mentioned earlier, Muse has not only improved efficiency, but is also very good in generating image quality.

The researchers compared it with DALL·E, LAFITE, LDM, GLIDE, DALL·E 2, as well as Google's own Imagen and Parti, and tested their FID and CLIP scores.

(The FID score evaluates the quality of generated images: the lower, the better. The CLIP score measures how well the image matches the text: the higher, the better.)
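For intuition, FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. Below is the formula in the simplified diagonal-covariance case (real FID uses full covariance matrices and a matrix square root; all numbers here are toy values, not scores from the paper):

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1 * var2)).
    Lower means the generated distribution is closer to the real one."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = float(np.sum((mu1 - mu2) ** 2))
    cov_term = float(np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
    return mean_term + cov_term

# Identical feature statistics give FID 0; mismatched means push it up.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [3.0, 4.0], [1.0, 1.0]))  # 25.0
```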

The results show that the Muse-3B model's zero-shot FID-30K score on the COCO validation set is 7.88, second only to the much larger Imagen-3.4B and Parti-20B models.


Even better, the Muse-900M model achieved a new SOTA on the CC3M dataset, with an FID score of 6.06.

Meanwhile, the model's CLIP score of 0.26 was also the highest reported at the time.


In addition, to further confirm Muse's image-generation efficiency, the researchers compared single-image generation times across models:

Muse was fastest at both 256×256 and 512×512 resolution: 0.5 s and 1.3 s respectively.


Research Team

Muse's research team comes from Google; the two co-first authors are Huiwen Chang and Han Zhang.


Huiwen Chang is currently a senior researcher at Google.

She completed her undergraduate studies at Tsinghua University, received her PhD from Princeton University, and has interned at Adobe, Facebook, and elsewhere.


Han Zhang received his undergraduate degree from China Agricultural University, his master's degree from Beijing University of Posts and Telecommunications, and his PhD in computer science from Rutgers University.

His research interests are computer vision, deep learning, and medical image analysis.


However, it is worth mentioning that Muse has not been officially released yet.


Some netizens joked that although Muse looks very appealing, given Google's usual habits its official release may still be a long way off; after all, Google still has AI announced back in 2018 that has never been released.


Speaking of which, what do you think of the effect of Muse?

Are you looking forward to its official release?

Portal: https://www.php.cn/link/854f1fb6f65734d9e49f708d6cd84ad6

Reference link: https://twitter.com/AlphaSignalAI/status/1610404589966180360
