


UC Berkeley and Google propose IGN, a model that generates realistic images in a single step and could spell the end of diffusion models, with an American TV series as its source of inspiration
Will the diffusion model, which has taken the field by storm, be made obsolete?
Currently, generative AI models such as GANs, diffusion models, and consistency models generate images by mapping inputs to outputs that follow the target data distribution.
Normally, such a model has to be trained on a large number of real images before it can ensure that the images it generates look realistic.
Recently, researchers from UC Berkeley and Google proposed a new generative model: the Idempotent Generative Network (IGN).
Paper address: https://arxiv.org/abs/2311.01462
IGN can take a variety of inputs, such as random noise or simple graphics, and generate realistic images in a single step, with no need for multi-step iteration.
The model is intended to be a "global projector" that maps any input data onto the target data distribution.
In other words, it is not limited to one specific type of input the way typical image generation models are.
Interestingly, a classic scene from the sitcom "Seinfeld" actually became the authors' source of inspiration.
This scene neatly captures the concept of an "idempotent operator": no matter how many times the operation is applied to the same input, the result stays the same, that is, f(f(z)) = f(z).
As Jerry Seinfeld humorously pointed out, some real-life actions can also be considered idempotent.
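To make the idea concrete, here is a minimal sketch (not from the paper) of two everyday idempotent operations in Python; applying either of them a second time changes nothing.

```python
# Idempotence: applying an operation twice gives the same result as applying it once.
values = [3, -1, -7, 2]

# Taking absolute values is idempotent.
once = [abs(v) for v in values]
twice = [abs(v) for v in once]
assert once == twice  # abs(abs(v)) == abs(v)

# Sorting is idempotent as well.
assert sorted(sorted(values)) == sorted(values)
```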
Idempotent Generative Network
IGN differs from GANs and diffusion models in two important ways:
- Unlike a GAN, IGN does not need separate generator and discriminator networks; it is a "self-adversarial" model that performs generation and discrimination at the same time.
- Unlike diffusion models, which refine the output over many incremental steps, IGN tries to map the input to the data distribution in a single step.
So how does IGN (the idempotent generative model) actually work?
IGN is trained to map samples from a source distribution P_z to a target distribution P_x: given input samples z drawn from P_z, it should produce generated samples that follow P_x.
Given a dataset of examples, each drawn from P_x, the researchers train the model f to map P_z to P_x.
The distributions P_z and P_x are assumed to lie in the same space, i.e. their instances have the same dimensions, which allows f to be applied to both kinds of instances, z and x.
The figure in the paper illustrates the basic idea behind IGN: real examples (x) are invariant under the model f, that is, f(x) = x, while other inputs (z) are mapped by optimization onto the manifold of instances that f maps to themselves.
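Restated as equations (the notation here is chosen for illustration and is not necessarily the paper's exact symbols), the two conditions IGN optimizes toward are:

```latex
\begin{aligned}
f(x)    &= x    && \text{real examples are fixed points of } f \\
f(f(z)) &= f(z) && \text{generated outputs lie on the manifold that } f \text{ maps to itself}
\end{aligned}
```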
The paper also includes a PyTorch code example of the IGN training routine.
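Below is a minimal sketch of what such a training loop could look like, assuming the two objectives above plus a tightness term discussed in the paper that discourages trivial collapse. Function names, the frozen-copy mechanism's exact form, and the loss weight are illustrative assumptions rather than the authors' exact code.

```python
import copy
import torch

def train_ign(f, opt, data_loader, n_epochs, tight_weight=0.1):
    """Sketch of an IGN-style training loop (names and weights are illustrative)."""
    for _ in range(n_epochs):
        for x in data_loader:
            z = torch.randn_like(x)                  # source noise, same shape as the data

            # Frozen copy of f, so each loss only updates the intended application of f.
            f_frozen = copy.deepcopy(f)
            for p in f_frozen.parameters():
                p.requires_grad_(False)

            fx = f(x)                                # real examples should be fixed points
            fz = f(z)                                # single-step generation
            f_fz = f_frozen(fz)                      # idempotence path: gradient flows into the inner f
            ff_z = f(fz.detach())                    # tightness path: gradient flows into the outer f

            loss_rec = (fx - x).pow(2).mean()                  # f(x) ≈ x
            loss_idem = (f_fz - fz).pow(2).mean()              # f(f(z)) ≈ f(z)
            loss_tight = -(ff_z - fz.detach()).pow(2).mean()   # keep the mapped manifold from collapsing

            loss = loss_rec + loss_idem + tight_weight * loss_tight
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The frozen copy splits the gradient flow so that the idempotence term updates f only through its inner application while the tightness term updates only the outer application; loosely speaking, one application plays a generator-like role and the other a discriminator-like role within the same network.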
So how well does IGN actually perform?
The authors admit that, at this stage, IGN's generation results cannot yet compete with state-of-the-art models.
In the experiments, a smaller model and lower-resolution datasets were used, with the exploration mainly focused on a simplified approach.
Of course, foundational generative modeling techniques such as GANs and diffusion models also took a long time to reach mature, large-scale performance.
Experimental settings
The researchers evaluated IGN on MNIST (a grayscale handwritten-digit dataset) and CelebA (a face-image dataset), using image resolutions of 28×28 and 64×64 respectively.
The authors used a simple autoencoder architecture, in which the encoder is the simple five-layer discriminator backbone from DCGAN and the decoder is the DCGAN generator. The training and network hyperparameters are listed in Table 1.
Generation results
Figure 4 shows qualitative results on both datasets after applying the model once and twice in a row.
As shown, a single application of IGN (f(z)) already produces coherent generations. However, artifacts can appear, such as holes in MNIST digits or distorted pixels around the hair at the top of the head in face images.
Applying f a second time (f(f(z))) corrects these issues, filling the holes or reducing the total variation around noisy facial patches.
Figure 7 shows additional results, together with the effect of applying f three times.
Comparing f(f(z)) with f(f(f(z))) shows that once an image is close to the learned manifold, applying f again produces only minimal change, since the image is already considered in-distribution.
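In practice, generation and refinement are just repeated forward passes through the same trained network; a hypothetical sketch, assuming a trained IGN model f operating on 64×64 RGB images:

```python
import torch

# f is assumed to be a trained IGN model (batch-first input).
z = torch.randn(16, 3, 64, 64)   # a batch of noise living in image space
x1 = f(z)                        # single-step generation
x2 = f(x1)                       # optional refinement: fixes holes and artifacts
x3 = f(x2)                       # further applications change the image only minimally
```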
Latent Space Manipulation
By performing manipulations in latent space, the authors demonstrate that IGN has a consistent latent space, similar to what has been shown for GANs. Figure 6 shows latent-space arithmetic.
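As a rough illustration of what such latent arithmetic might look like (an assumption-laden sketch, since for IGN the "latent" space is simply the input space and the exact operands of Figure 6 are not reproduced here):

```python
import torch

# f is assumed to be a trained IGN model on 3x64x64 images (batch-first).
z_a = torch.randn(1, 3, 64, 64)
z_b = torch.randn(1, 3, 64, 64)
z_c = torch.randn(1, 3, 64, 64)

# GAN-style vector arithmetic on the inputs, then a single projection through f.
z_combined = z_a - z_b + z_c
image = f(z_combined)            # IGN maps the manipulated latent to a realistic image
```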
Out-of-distribution mapping
The authors also verified IGN's potential as a "global projector" by feeding the model images drawn from a variety of other distributions and generating their "natural image" equivalents.
They demonstrate this in Figure 5 by denoising noisy images x + n, colorizing grayscale images, and translating sketches into realistic images.
Relative to the original image x, these inverse tasks are ill-posed, yet IGN produces natural mappings that respect the structure of the original image.
As shown, applying f successively can further improve image quality, for example removing dark patches and smoke artifacts from projected sketches.
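In code, this kind of out-of-distribution projection is just another forward pass through the same trained network; a hypothetical sketch of the denoising case (the corruption level and variable names are illustrative):

```python
import torch

# f is assumed to be a trained IGN model; x is a batch of clean images in [0, 1].
x = torch.rand(1, 3, 64, 64)
noise = 0.3 * torch.randn_like(x)

x_noisy = (x + noise).clamp(0.0, 1.0)   # out-of-distribution input: a corrupted image
x_projected = f(x_noisy)                # one application projects it back toward the data manifold
x_refined = f(x_projected)              # applying f again can clean up remaining artifacts
```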
These results show that IGN is more efficient at inference: once trained, it generates results in a single step.
It also produces more consistent outputs, which may extend to additional applications such as medical image restoration.
The authors of the paper state:
We view this work as a first step toward a model that learns to map arbitrary inputs to a target distribution, a new paradigm in generative modeling.
Next, the research team plans to scale IGN up with more data, hoping to unlock the full potential of this new kind of generative AI model.
The latest research code will be released on GitHub in the future.
References:
https://www.php.cn/link/2bd388f731f26312bfc0fe30da009595
https://www.php.cn/link/e1e4e65fddf79af60aab04457a6565a6
