Table of Contents
GAN is dead, the diffusion model lives on
Framework
Experiments and Impact

HKUST & MSRA Research: Regarding image-to-image conversion, Finetuning is all you need

May 04, 2023, 11:10 PM


Many content production projects require converting simple sketches into realistic pictures. This is an image-to-image translation problem: a deep generative model learns the conditional distribution of natural images given the input.

The basic idea of image-to-image translation is to use a pre-trained neural network to capture the natural image manifold. Translation then amounts to traversing the manifold and locating the point that matches the input semantics. The synthesis network is pre-trained on many images so that it produces reliable output from any sample of its latent space; downstream training then maps user input into the model's latent representation.

Over the years, many task-specific methods have reached SOTA level, yet current solutions still struggle to create high-fidelity images for real-world use.


In a recent paper, researchers from the Hong Kong University of Science and Technology and Microsoft Research Asia argue that for image-to-image translation, pre-training is all you need. Previous methods required specialized architecture design and trained a single translation model from scratch, making it difficult to generate complex scenes with high quality, especially when paired training data is scarce.

Therefore, the researchers treat each image-to-image translation problem as a downstream task and introduce a simple, general framework that adapts a pre-trained diffusion model to a variety of translations. They call the proposed model PITI (pretraining-based image-to-image translation). They also propose adversarial training to enhance texture synthesis during diffusion-model training, combined with normalized guided sampling to improve generation quality.
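To make the guided-sampling idea concrete, here is a minimal sketch of classifier-free guidance with a norm-rescaling step. The function names and the exact normalization are illustrative assumptions, not the paper's implementation; PITI's normalized guidance may rescale differently.

```python
import math

def guided_eps(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one with strength w."""
    return [eu + w * (ec - eu) for eu, ec in zip(eps_uncond, eps_cond)]

def normalized_guided_eps(eps_uncond, eps_cond, w):
    """Rescale the guided prediction so its L2 norm matches the
    conditional prediction's norm, a common trick to curb the
    over-saturation that large guidance weights cause."""
    g = guided_eps(eps_uncond, eps_cond, w)
    norm_g = math.sqrt(sum(v * v for v in g))
    norm_c = math.sqrt(sum(v * v for v in eps_cond))
    if norm_g == 0.0:
        return g
    return [v * norm_c / norm_g for v in g]
```

At w = 1 the guided prediction is simply the conditional one; larger w sharpens adherence to the condition, and the normalization keeps the prediction's magnitude in a plausible range.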

Finally, the researchers conducted extensive empirical comparisons across tasks on challenging benchmarks such as ADE20K, COCO-Stuff, and DIODE, demonstrating that PITI-synthesized images display unprecedented realism and fidelity.


  • Paper link: https://arxiv.org/pdf/2205.12952.pdf
  • Project homepage: https://tengfei-wang.github.io/PITI/index.html

GAN is dead, the diffusion model lives on

Rather than using GANs, which perform best only within narrow domains, the authors adopt a diffusion model that can synthesize a wide variety of images. The pre-trained prior should generate images from two kinds of latent codes: one describing the visual semantics and another accounting for image fluctuations. A semantic, low-dimensional latent is critical for downstream tasks; otherwise it would be impossible to map modality-specific input into the complex latent space. Given this, they use GLIDE, a data-driven model that can generate diverse images, as the pre-trained generative prior. Since GLIDE is conditioned on text embeddings, it provides a semantic latent space.

Diffusion- and score-based methods deliver strong generation quality across benchmarks. On class-conditional ImageNet, these models rival GAN-based methods in visual quality and sampling diversity. Recently, diffusion models trained on large-scale text-image pairs have shown surprising capabilities; a well-trained diffusion model can serve as a general generative prior for synthesis.
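For readers unfamiliar with diffusion models, the forward (noising) process they invert can be sketched in a few lines. The cosine schedule below is one common choice of noise schedule, used here only for illustration; it is not necessarily the schedule GLIDE or PITI uses.

```python
import math
import random

def cosine_alpha_bar(t, T):
    """Cumulative signal-retention factor alpha_bar(t) in [0, 1]:
    1 at t=0 (clean image), near 0 at t=T (pure noise)."""
    return math.cos((t / T) * math.pi / 2) ** 2

def q_sample(x0, t, T, rng=random):
    """Sample x_t ~ q(x_t | x_0): scale the clean signal by
    sqrt(alpha_bar) and add Gaussian noise with the remaining
    variance 1 - alpha_bar."""
    ab = cosine_alpha_bar(t, T)
    return [math.sqrt(ab) * v + math.sqrt(1.0 - ab) * rng.gauss(0, 1) for v in x0]
```

The generative model is trained to predict and remove this noise step by step, which is what lets a single pre-trained network act as a general prior over natural images.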


Framework

Using a pretext task, the authors pre-train on large amounts of data to develop a highly meaningful latent space for predicting image statistics.

For downstream tasks, they conditionally fine-tune the semantic space to map task-specific conditions into it; the model then produces plausible visuals from its pre-trained knowledge.

The authors pre-train the diffusion model with semantic input, using the text-conditioned, image-trained GLIDE model. A Transformer network encodes the text input and outputs tokens for the diffusion model; by design, the text embedding lives in a semantically meaningful space.
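The downstream adaptation can be pictured as training a small encoder that maps a task-specific condition (a mask, a sketch) into the frozen prior's semantic embedding space. The toy linear encoder and plain SGD below are an assumption for illustration only; PITI's actual condition encoder is a neural network fine-tuned against GLIDE's embedding space.

```python
import random

def train_condition_encoder(pairs, dim_in, dim_out, lr=0.1, epochs=200):
    """Fit a linear map W (dim_out x dim_in) so that W @ condition
    approximates the frozen prior's semantic embedding for that image.
    `pairs` is a list of (condition_vector, target_embedding) tuples."""
    rng = random.Random(0)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(dim_in)] for _ in range(dim_out)]
    for _ in range(epochs):
        for x, y in pairs:
            # forward pass: predicted embedding
            pred = [sum(W[i][j] * x[j] for j in range(dim_in)) for i in range(dim_out)]
            # gradient of mean-squared error, applied per sample
            err = [p - t for p, t in zip(pred, y)]
            for i in range(dim_out):
                for j in range(dim_in):
                    W[i][j] -= lr * err[i] * x[j]
    return W
```

The key point the sketch captures is that only the encoder is trained; the generative prior stays fixed, so every downstream task reuses the same pre-trained weights.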


In the paper's qualitative comparisons, pre-trained models improve image quality and diversity over models trained from scratch. Because the COCO dataset contains numerous categories and combinations, the from-scratch approach cannot deliver compelling results even with a strong architecture, whereas their method creates rich details with precise semantics for difficult scenes, illustrating its versatility.

Experiments and Impact

Table 1 shows that the proposed method consistently outperforms the other models. Compared with the leading baseline OASIS, PITI achieves a significant FID improvement in mask-to-image synthesis, and it also performs well on sketch-to-image and geometry-to-image synthesis.
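FID (Fréchet Inception Distance), the metric behind these comparisons, measures the Fréchet distance between Gaussians fitted to real and generated feature statistics: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)). For scalar (1-D) statistics that formula collapses to a closed form, which the sketch below implements; real FID is computed on Inception-network features with full covariance matrices.

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians. This is the
    general FID formula specialized to scalar covariances:
    (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1 * var2)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

Identical distributions give a distance of zero, and lower is better, which is why a large FID drop over OASIS is a meaningful result.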


Figure 3 shows visualization results on different tasks. The experiments show that, compared with training from scratch, the pre-trained model significantly improves the quality and diversity of generated images, producing vivid details and correct semantics even for challenging generation tasks.


The study also ran a user study on COCO-Stuff mask-to-image synthesis on Amazon Mechanical Turk, collecting 3,000 votes from 20 participants. Participants were shown two images at a time and asked to vote for the more realistic one. As Table 2 shows, the proposed method outperforms the from-scratch model and other baselines by a large margin.
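Aggregating such two-alternative forced-choice votes into per-method win rates is straightforward; the helper below is a generic sketch of that tally, not the paper's analysis script.

```python
def preference_table(votes):
    """votes: list of (method_a, method_b, winner) tuples from
    two-alternative forced-choice trials. Returns each method's
    win rate over all comparisons it appeared in."""
    wins, totals = {}, {}
    for a, b, winner in votes:
        for m in (a, b):
            totals[m] = totals.get(m, 0) + 1
        wins[winner] = wins.get(winner, 0) + 1
    return {m: wins.get(m, 0) / totals[m] for m in totals}
```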


Conditional image synthesis creates high-quality pictures that satisfy given conditions, and is used in computer vision and graphics to create and manipulate content. Large-scale pre-training has improved image classification, object recognition, and semantic segmentation; whether it also benefits general generation tasks had remained an open question.

Energy usage and carbon emissions are key concerns in image pre-training. Pre-training is energy-intensive, but it is required only once: conditional fine-tuning lets all downstream tasks reuse the same pre-trained model. Pre-training also allows generative models to be trained with less data, improving image synthesis when data is limited by privacy concerns or expensive annotation.


