


Efficiency crushes DALL·E 2 and Imagen: Google's new model achieves a new SOTA, and can even handle Photoshop-style edits in one sentence
At the beginning of the new year, Google AI is back to work on text-to-image generation models.
This time, their new model, Muse, has reached a new SOTA (the current best level) on the CC3M dataset.
And its efficiency far exceeds that of the wildly popular DALL·E 2 and Imagen (both diffusion models), as well as Parti (an autoregressive model):
generating a single 512x512 image takes only 1.3 seconds.
As for image editing, you can edit the original image with just a text command.
(Looks like you no longer have to worry about learning Photoshop~)
If you want more precise results, you can also select a mask position and edit a specific area. For example, replace the buildings in the background with hot air balloons.
Once Muse was officially announced, it quickly attracted a lot of attention. The original post has already received 4,000 likes.
Seeing yet another impressive release from Google, some people have even begun to predict:
The competition among AI developers is fierce right now. It seems 2023 is going to be a really exciting year.
More efficient than DALL·E 2 and Imagen
Let's start with Muse itself, just released by Google.
First of all, in terms of quality, most of Muse's generated images are crisp and natural-looking.
Let’s take a look at more examples to get a feel for it~
For example, a baby sloth wearing a woolen hat operating a computer, or a sheep in a wine glass:
Subjects that would normally never appear together coexist harmoniously in one picture, without any sense of dissonance.
If you think this only counts as standard AIGC fare, take a look at Muse's editing features.
For example, one-click outfit change (you can also change gender):
This does not require any masking and can be done in one sentence.
And with a mask, you can pull off even slicker tricks, such as switching the background with one click: from the original location to New York, to Paris, then to San Francisco.
You can also go from the seaside to London, to a sea of flowers, or even fly to Saturn's rings to attempt a thrilling skateboard dolphin jump.
(Good guy, not only can you sightsee without leaving home, you can fly into the sky with one click...)
The results are genuinely impressive. So what technology is behind Muse? Why is it more efficient than DALL·E 2 and Imagen?
An important reason is that DALL·E 2 and Imagen have to store all of the knowledge learned during training in their model parameters.
As a result, they need ever-larger models and ever more training data to acquire more knowledge, tying "better" to "bigger".
The cost is an enormous parameter count, and efficiency suffers along with it.
According to the Google AI team, the main technique they use is called masked image modeling.
This is an emerging self-supervised pre-training approach. Its basic idea, simply put, is this:
parts of the input image are randomly masked out, and the model is trained to reconstruct them.
Muse is trained as a masked modeling task in the space of discrete image tokens: conditioned on text embeddings extracted from a pre-trained large language model, it learns to predict randomly masked image tokens.
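To make the training objective concrete, here is a minimal PyTorch sketch of masked image-token modeling. It is an illustration of the general technique under assumed settings, not Muse's actual code: the codebook size, sequence length, architecture, and fixed mask ratio are all hypothetical (Muse tokenizes images with a VQ autoencoder, conditions on a frozen T5-XXL text encoder, and varies the mask ratio during training).

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 8192     # size of the discrete image-token codebook (assumed)
MASK_ID = VOCAB_SIZE  # extra id reserved for the [MASK] token
SEQ_LEN = 256         # e.g. a 16x16 grid of image tokens (assumed)

class MaskedTokenPredictor(nn.Module):
    """Bidirectional transformer that predicts masked image tokens,
    cross-attending to embeddings from a frozen text encoder."""
    def __init__(self, dim=512):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE + 1, dim)  # +1 for [MASK]
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.head = nn.Linear(dim, VOCAB_SIZE)

    def forward(self, image_tokens, text_emb):
        # text_emb: (batch, text_len, dim), assumed already projected to `dim`
        x = self.tok_emb(image_tokens)
        x = self.decoder(x, memory=text_emb)  # no causal mask: fully bidirectional
        return self.head(x)                   # per-position logits over the codebook

def training_step(model, image_tokens, text_emb, mask_ratio=0.6):
    """Randomly mask image tokens; train the model to reconstruct them."""
    mask = torch.rand(image_tokens.shape) < mask_ratio
    inputs = image_tokens.masked_fill(mask, MASK_ID)
    logits = model(inputs, text_emb)
    # Cross-entropy is computed only on the masked positions.
    return nn.functional.cross_entropy(logits[mask], image_tokens[mask])
```

Because the model is supervised only where tokens were hidden, and attends to everything else bidirectionally, the same network can later fill in many masked positions per pass at inference time, which is what enables the parallel decoding described below.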
(Model architecture, from top to bottom: pre-trained text encoder, base model, super-resolution model.)
The Google team found that using a pre-trained large language model gives the AI a more fine-grained and thorough understanding of language.
On the output side, because the model has a good grasp of objects' spatial relationships, poses, and other properties, the generated images can be high-fidelity.
Compared with pixel-space diffusion models such as DALL·E 2 and Imagen, Muse works on discrete tokens and needs fewer sampling iterations.
And compared with autoregressive models such as Parti, Muse uses parallel decoding, which is more efficient still.
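Here is a rough sketch of what that parallel decoding looks like, in the confidence-based style of MaskGIT (which Muse's decoding follows); the linear commit schedule and all names below are illustrative simplifications (the paper uses a cosine schedule, and sampling temperature is omitted).

```python
import math
import torch

@torch.no_grad()
def parallel_decode(model, text_emb, seq_len=256, steps=12, mask_id=8192):
    """Resolve all image tokens in `steps` forward passes instead of `seq_len`."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)  # start fully masked
    for step in range(steps):
        logits = model(tokens, text_emb)         # predict ALL positions at once
        conf, pred = logits.softmax(-1).max(-1)  # per-position confidence + argmax
        masked = tokens == mask_id
        conf = conf.masked_fill(~masked, -1.0)   # never re-touch committed tokens
        # Commit an equal share of the remaining masked tokens each round;
        # the final round commits everything that is left.
        n_commit = math.ceil(masked.sum().item() / (steps - step))
        idx = conf.topk(n_commit, dim=-1).indices
        tokens.scatter_(1, idx, pred.gather(1, idx))
    return tokens
```

A dozen forward passes instead of 256 token-by-token steps (or the many denoising steps a diffusion sampler typically needs) is the intuition behind the efficiency gap.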
A new SOTA FID score
As mentioned earlier, Muse has not only improved efficiency; the quality of its generated images is also excellent.
The researchers compared it with DALL·E, LAFITE, LDM, GLIDE, DALL·E 2, as well as Google's own Imagen and Parti, and tested their FID and CLIP scores.
(The FID score evaluates the quality of generated images: the lower, the better. The CLIP score measures how well the image matches the text: the higher, the better.)
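For reference, FID is the Fréchet distance between two Gaussians fitted to Inception-v3 features of real and generated images. Below is a small numpy/scipy sketch of just the formula (feature extraction is omitted, and the function name is ours):

```python
import numpy as np
from scipy import linalg

def fid(mu1, cov1, mu2, cov2):
    """FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 @ C2)^(1/2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(cov1 @ cov2, disp=False)  # matrix square root
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)
```

The means and covariances are computed from features of many samples; the "30K" in FID-30K refers to evaluating on 30,000 generated images.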
The results show that the Muse-3B model's zero-shot FID-30K score on the COCO validation set is 7.88, second only to the much larger Imagen-3.4B and Parti-20B models.
Even better, the Muse-900M model achieved a new SOTA on the CC3M dataset, with an FID score of 6.06.
At the same time, the model's CLIP score of 0.26 was also the highest level reported at the time.
In addition, to further confirm Muse's generation efficiency, the researchers compared the per-image generation time of Muse and the other models:
Muse was the fastest at both 256x256 and 512x512 resolution: 0.5s and 1.3s respectively.
Research Team
Muse's research team comes from Google; the two co-first authors are Huiwen Chang and Han Zhang.
Huiwen Chang is currently a senior researcher at Google.
She did her undergraduate studies at Tsinghua University and received her PhD from Princeton University, with internship experience at Adobe, Facebook, and elsewhere.
Han Zhang received his undergraduate degree from China Agricultural University, his master's degree from Beijing University of Posts and Telecommunications, and his PhD in computer science from Rutgers University.
His research interests are computer vision, deep learning, and medical image analysis.
However, it is worth mentioning that Muse has not been officially released yet.
Some netizens joked that although Muse looks very tempting, given Google's track record of shelving things, its official release may still be a long way off; after all, they still have AI announced back in 2018 that has yet to ship.
Speaking of which, what do you think of the effect of Muse?
Are you looking forward to its official release?
Portal: https://www.php.cn/link/854f1fb6f65734d9e49f708d6cd84ad6
Reference link: https://twitter.com/AlphaSignalAI/status/1610404589966180360