
Can DALL-E and Flamingo understand each other? Three pre-trained SOTA neural networks unify images and text

WBOY
Release: 2023-04-12 16:49:10

An important goal of multi-modal research is to improve machines' ability to understand images and text. In particular, researchers have invested great effort in achieving meaningful communication between the two modalities. For example, image captioning should convert the semantic content of an image into coherent text that humans can understand. Conversely, text-to-image generative models can exploit the semantics of a textual description to create realistic images.

This leads to some interesting questions related to semantics. For a given image, which textual description describes it most accurately? Likewise, for a given text, what is the most meaningful way to realize it as an image? Regarding the first question, some studies argue that the best image description is one that is both natural and able to restore the visual content. As for the second question, a meaningful image should be high-quality, diverse, and faithful to the text.

In both cases, inspired by human communication, interactive tasks that pair a text-to-image model with an image-to-text model can help us select the most accurate image-text pairs.

As shown in Figure 1, in the first task, the image-text model is the information sender, and the text-image model is the information receiver. The sender's goal is to communicate the content of the image to the receiver using natural language so that it understands the language and reconstructs a realistic visual representation. Once the receiver can reconstruct the original image information with high fidelity, it indicates that the information has been successfully transferred. Researchers believe that the text description generated in this way is optimal, and the image generated through it is also most similar to the original image.


This rule is inspired by how people use language to communicate. Imagine the following scenario: during an emergency call, the police learn about a car accident and the condition of the injured over the phone. This essentially involves witnesses at the scene describing what they see. The police must mentally reconstruct the scene from the verbal description in order to organize an appropriate rescue operation. Naturally, the best textual description is the one that best guides reconstruction of the scene.

The second task involves text reconstruction: the text-image model becomes the message sender, and the image-text model becomes the message receiver. Once the two models agree on the content of the information at the textual level, the image medium used to convey the information is the optimal image that reproduces the source text.

In this article, the method proposed by researchers from the University of Munich, Siemens and other institutions is closely related to communication between agents. Language is the primary method for exchanging information between agents. But how can we be sure that the first agent and the second agent have the same understanding of what is a cat or what is a dog?


Paper address: https://arxiv.org/pdf/2212.12249.pdf

The idea this article explores is to have the first agent analyze an image and generate text describing it, and then have the second agent take that text and simulate an image from it; the latter process can be viewed as a form of embodiment. The study considers communication successful if the image simulated by the second agent is similar to the input image received by the first agent (see Figure 1).
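The round-trip idea above can be sketched in a few lines of Python. The three callables below (`caption_model`, `image_model`, `clip_embed`) are hypothetical stand-ins for the image-to-text model, the text-to-image model, and the CLIP encoder; only the cosine-similarity check at the end is concrete.

```python
def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def round_trip_score(source_image, caption_model, image_model, clip_embed):
    """Caption the image (agent 1), re-render the caption (agent 2), and
    score how close the reconstruction is to the source in CLIP space."""
    caption = caption_model(source_image)    # agent 1: image -> text
    reconstruction = image_model(caption)    # agent 2: text -> image
    return cosine(clip_embed(source_image), clip_embed(reconstruction))
```

A higher score means less information was lost in the text channel, which is exactly the success criterion the study uses.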

In the experiments, this study used off-the-shelf models, especially recently developed large-scale pre-trained models. For example, Flamingo and BLIP are image description models that can automatically generate text descriptions based on images. Likewise, image generation models trained on image-text pairs can understand the deep semantics of text and synthesize high-quality images, such as the DALL-E model and the latent diffusion model (SD).

Additionally, the study leveraged the CLIP model to compare images and text. CLIP is a vision-language model that maps images and text into a shared embedding space. The study uses manually annotated image-text datasets such as COCO and NoCaps to evaluate the quality of the generated text. Both the image and text generative models have stochastic components that allow sampling from a distribution, and thus selecting the best candidate from a range of texts and images. Different sampling methods, including nucleus sampling, can be used in image description models; this article uses nucleus sampling as the baseline against which the superiority of its method is shown.
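Nucleus (top-p) sampling itself is simple to state: keep the smallest set of highest-probability tokens whose cumulative mass reaches p, renormalize, and sample from that set. A minimal sketch over an explicit token distribution (the toy vocabulary here is illustrative, not from the paper):

```python
import random

def nucleus_sample(token_probs, p=0.9, rng=None):
    """Top-p sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, then sample proportionally from that set."""
    rng = rng or random.Random()
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    # Sample from the kept tokens, implicitly renormalized by `total`.
    r = rng.random() * total
    acc = 0.0
    for token, prob in kept:
        acc += prob
        if r <= acc:
            return token
    return kept[-1][0]
```

With a small p the nucleus collapses to the single most likely token, which is why nucleus sampling interpolates between greedy decoding and full sampling.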

Method Overview

The framework of this article consists of three pre-trained SOTA neural networks: first, an image-to-text generation model; second, a text-to-image generation model; third, a multi-modal representation model consisting of an image encoder and a text encoder, which map images and texts into their respective semantic embeddings.


Image reconstruction through text description

As shown in the left half of Figure 2, the image reconstruction task uses language as the instruction for reconstructing a source image, and carrying out this process yields the optimal text describing the source scene. First, a source image x is fed to the BLIP model to generate multiple candidate texts y_k, for example "a red panda eats leaves in the woods". The generated set of text candidates is denoted by C. Each text y_k is then sent to the SD model to generate an image x'_k; here x'_k is the image generated from the red-panda description. Subsequently, the CLIP image encoder is used to extract semantic features from the source and generated images, denoted f(x) and f(x'_k) respectively.

The cosine similarity between these embedding vectors is then computed in order to find the best candidate text description y_s, i.e.

$$s = \arg\max_{k} \; \cos\big(f(x),\, f(x'_k)\big)$$

where s is the index of the candidate whose generated image is closest to the source image.
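This selection rule reduces to an argmax over cosine similarities in CLIP image-embedding space. A minimal sketch, assuming the embeddings have already been computed (the vectors in the usage example are illustrative placeholders, not real CLIP features):

```python
def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))

def select_best_caption(src_img_embed, rendered_img_embeds):
    """Return the index s of the candidate caption whose rendered image
    x'_s is most similar to the source image x in CLIP embedding space."""
    sims = [cosine(src_img_embed, e) for e in rendered_img_embeds]
    return max(range(len(sims)), key=sims.__getitem__)
```

The winning index picks out both the best caption y_s and its rendered image x'_s at once, since they are paired by construction.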

The study uses CIDEr (an image-description metric) against human reference annotations to evaluate the selected texts. Since the interest here is in the quality of the generated text, the BLIP model is set to output texts of approximately the same length. This ensures a relatively fair comparison, since text length is positively correlated with how much of the image's information can be conveyed. Throughout this work, all models are frozen and no fine-tuning is performed.

Text reconstruction through images

The right half of Figure 2 shows the reverse of the process described in the previous section: the BLIP model must guess the source text, guided by SD, which has access to the text but can only render its content as an image. The process starts by using SD to generate candidate images x_k for the text y; the resulting set of candidate images is denoted by K. Generating images with SD involves a random sampling process, where each run may land on a different valid image sample in a huge pixel space. This sampling diversity provides a pool of candidates from which to filter out the best image. The BLIP model then generates a text description y'_k for each sampled image x_k; here y'_k refers to a caption such as "a red panda crawling in the forest". The study then uses the CLIP text encoder to extract features of the source text and the generated texts, denoted g(y) and g(y'_k) respectively. The goal of this task is to find the best candidate image x_s that matches the semantics of the text y. To do so, the study compares the distance between each generated text and the input text, and selects the image whose paired text has the smallest distance, i.e.

$$s = \arg\min_{k} \Big(1 - \cos\big(g(y),\, g(y'_k)\big)\Big)$$

The study holds that the image x_s best depicts the text description y, because it delivers the content to the receiver with minimal information loss. Furthermore, the study treats the ground-truth image paired with the text y as a reference, and quantifies how good the best image is by its proximity to this reference image.
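Mirroring the caption-selection step, this direction is an argmin over cosine distances in CLIP text-embedding space. A minimal sketch under the same assumption that embeddings are precomputed (the vectors in the usage example are illustrative placeholders):

```python
def cosine_distance(u, v):
    # Cosine distance = 1 - cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return 1.0 - dot / norm

def select_best_image(src_text_embed, back_caption_embeds):
    """Return the index s of the candidate image whose back-caption y'_s
    has the smallest cosine distance to the source text y."""
    dists = [cosine_distance(src_text_embed, e) for e in back_caption_embeds]
    return min(range(len(dists)), key=dists.__getitem__)
```

As before, the winning index identifies both the best image x_s and its back-caption y'_s, since the candidates are paired.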

Experimental Results

The left chart in Figure 3 shows the correlation between image reconstruction quality and description quality on the two datasets: for each given image, the better the reconstructed image (x-axis), the better the text description (y-axis).

The right chart of Figure 3 reveals the relationship between the quality of the recovered text and the quality of the generated image: for each given text, the better the reconstructed text description (x-axis), the better the image quality (y-axis).


Figure 4 (a) and (b) show the relationship between image reconstruction quality and average text quality relative to the source image. Figure 4 (c) and (d) show the correlation between text distance and reconstructed image quality.


Table 1 shows that the study's sampling method outperforms nucleus sampling on every metric, with relative gains of up to 7.7%.


Figure 5 shows qualitative examples of two reconstruction tasks.



Source: 51cto.com