
Unlimited video generation, planning and decision-making: Diffusion Forcing integrates next-token prediction and full-sequence diffusion

Jul 23, 2024, 02:05 PM

Autoregressive large language models built on the next-token prediction paradigm are now popular worldwide. At the same time, the flood of synthetic images and videos on the Internet has demonstrated the power of diffusion models.

Recently, a research team at MIT CSAIL (including Boyuan Chen, a PhD student at MIT) integrated the strengths of the full-sequence diffusion model and the next-token model, proposing a new training and sampling paradigm: Diffusion Forcing (DF).

  • Paper title: Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion

  • Paper address: https://arxiv.org/pdf/2407.01392

  • Project website: https://boyuan.space/diffusion-forcing

  • Code address: https://github.com/buoyancy99/diffusion-forcing

As shown below, Diffusion Forcing clearly outperforms the two baseline methods, full-sequence diffusion and teacher forcing, in terms of consistency and stability.


In this framework, each token is associated with a random, independent noise level, and a shared next-token prediction model can denoise tokens according to an arbitrary, independent, per-token schedule.

The inspiration for this method comes from an observation: adding noise to a token is a form of partial masking: zero noise means the token is unmasked, while complete noise masks it entirely. DF therefore forces the model to learn to unmask arbitrary, variably noised collections of tokens (Figure 2).
At the same time, by parameterizing the prediction as a composition of next-token prediction models, the system can flexibly generate sequences of different lengths and generalize compositionally to new trajectories (Figure 1).
For sequence generation, the team instantiated DF as Causal Diffusion Forcing (CDF), in which future tokens depend on past tokens through a causal architecture. The model is trained to denoise all tokens of a sequence at once, with each token carrying an independent noise level.

During sampling, CDF gradually denoises a sequence of Gaussian-noise frames into clean samples, where different frames may have different noise levels at each denoising step. Like a next-token prediction model, CDF can generate sequences of variable length; unlike next-token prediction, CDF's performance is very stable: whether predicting the immediate next token, tokens thousands of steps in the future, or even continuous tokens.

Additionally, like full-sequence diffusion, CDF can accept guidance, enabling high-reward generation. By jointly leveraging causality, flexible horizons, and variable noise schedules, CDF enables a new capability: Monte Carlo Tree Guidance (MCTG). Compared with non-causal full-sequence diffusion models, MCTG greatly improves the rate of sampling high-reward generations. Figure 1 gives an overview of these capabilities.

Diffusion Forcing

1. Treating the noising process as partial masking

First, we can treat any set of tokens (whether or not it forms a sequence) as an ordered collection indexed by t. Training next-token prediction with teacher forcing can then be interpreted as masking out each token x_t at time t and predicting it from the past x_{1:t−1}.

For sequences, this operation amounts to masking along the time axis. Full-sequence forward diffusion (i.e., the process of gradually adding noise to the data x^0_{1:T}) can likewise be viewed as a form of partial masking, which might be called masking along the noise axis.

In fact, after K noising steps, x^K_{1:T} is (approximately) white noise, retaining no information about the original data. As shown in Figure 2, the team established a unified view of masking along these two axes.
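To make the "masking along the noise axis" view concrete, below is a minimal Python/PyTorch sketch of a DDPM-style forward process with an independent noise level per token. The function name, shapes, and the linear beta schedule are illustrative assumptions, not taken from the paper:

```python
import torch

def noise_tokens(x0, k, alphas_cumprod):
    """Forward-diffuse each token to its own noise level k_t (a hypothetical sketch).

    x0:             clean tokens, shape (T, D)
    k:              integer noise level per token, shape (T,)
    alphas_cumprod: cumulative alpha-bar values, shape (K + 1,), with
                    alphas_cumprod[0] = 1 so that k_t = 0 means "unmasked".
    """
    a_bar = alphas_cumprod[k].unsqueeze(-1)              # (T, 1)
    eps = torch.randn_like(x0)                           # independent noise per token
    # k_t = 0 leaves a token intact; k_t = K yields (near) white noise.
    x_k = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return x_k, eps

# Example: T = 8 tokens, each at its own random noise level.
T, D, K = 8, 16, 1000
betas = torch.linspace(1e-4, 0.02, K)                    # standard DDPM-style schedule
alphas_cumprod = torch.cat([torch.ones(1), torch.cumprod(1.0 - betas, dim=0)])
x0 = torch.randn(T, D)
k = torch.randint(0, K + 1, (T,))
x_k, eps = noise_tokens(x0, k, alphas_cumprod)
```

Setting k_t = K for all future tokens and k_t = 0 for the past recovers teacher forcing's mask along the time axis, while a uniform k across all tokens recovers full-sequence diffusion's mask along the noise axis.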

2. Diffusion forcing: Different tokens have different noise levels

The Diffusion Forcing (DF) framework can train on and sample noisy tokens x_t^{k_t} of arbitrary sequence length, where, crucially, the noise level k_t of each token varies with the time step t.

This paper focuses on time-series data, so the team instantiates DF with a causal architecture, yielding Causal Diffusion Forcing (CDF). Simply put, this is a minimal implementation built on a basic recurrent neural network (RNN). An RNN with weights θ maintains a hidden state z_t that captures the influence of past tokens; it evolves through a recurrent layer according to the dynamics z_t ∼ p_θ(z_t | z_{t−1}, x_t^{k_t}, k_t). When a noisy observation x_t^{k_t} arrives, the hidden state is updated in a Markovian manner.
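As a hedged illustration of this unit, the Markovian update and noise prediction can be packaged into a single recurrent cell as below. The GRU cell, the learned noise-level embedding, and predicting ε from the updated state z_t are implementation assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CDFUnit(nn.Module):
    """One hypothetical Causal Diffusion Forcing step: a GRU cell updates the
    hidden state from the noisy token x_t^{k_t} and its noise level k_t, and
    a linear head predicts the injected noise (the eps_theta of the text)."""

    def __init__(self, token_dim, hidden_dim, K):
        super().__init__()
        self.k_embed = nn.Embedding(K + 1, hidden_dim)    # embed noise level k_t
        self.cell = nn.GRUCell(token_dim + hidden_dim, hidden_dim)
        self.eps_head = nn.Linear(hidden_dim, token_dim)  # noise prediction head

    def forward(self, z_prev, x_k, k):
        # Markovian update: z_t depends only on z_{t-1}, x_t^{k_t}, and k_t.
        inp = torch.cat([x_k, self.k_embed(k)], dim=-1)
        z = self.cell(inp, z_prev)
        return z, self.eps_head(z)
```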

When k_t = 0, this is the posterior update of Bayesian filtering; when k_t = K (pure noise, carrying no information), it is equivalent to modeling the Bayesian-filtering "prior distribution" p_θ(z_t | z_{t−1}).

Given the hidden state z_t, an observation model p_θ(x_t^0 | z_t) predicts x_t. This unit's input-output behavior is the same as a standard conditional diffusion model: conditioned on z_{t−1} and the noisy token x_t^{k_t}, it predicts the noise-free x_t = x_t^0, and thereby indirectly predicts the noise ε^{k_t} through affine reparameterization. The classic diffusion objective can therefore be used directly to train (causal) Diffusion Forcing: parameterize the unit via the noise prediction ε_θ, then find the parameters θ by minimizing the following loss:
L(θ) = E_{k_{1:T}, x_{1:T}, ε} [ Σ_{t=1}^{T} || ε^{k_t} − ε_θ(z_{t−1}, x_t^{k_t}, k_t) ||² ]    (3.1)
Algorithm 1 gives the pseudocode. The key point is that this loss captures the essential elements of both Bayesian filtering and conditional diffusion. The team also re-derived common diffusion-model training techniques for Diffusion Forcing (detailed in the paper's appendix), and arrived at an informal theorem.
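Reusing the noise_tokens and CDFUnit sketches above, one training step implementing loss (3.1) might look as follows (batch size 1 for clarity; a sketch of Algorithm 1 under those assumptions, not the authors' code):

```python
def training_step(model, x0_seq, alphas_cumprod, K):
    """Draw an independent noise level per token, noise the whole sequence,
    unroll the causal unit, and regress the injected noise (loss 3.1)."""
    T, _ = x0_seq.shape
    k = torch.randint(0, K + 1, (T,))              # independent k_t per token
    x_k_seq, eps_seq = noise_tokens(x0_seq, k, alphas_cumprod)

    z = torch.zeros(1, model.cell.hidden_size)     # initial hidden state
    loss = 0.0
    for t in range(T):
        z, eps_pred = model(z, x_k_seq[t : t + 1], k[t : t + 1])
        loss = loss + ((eps_pred - eps_seq[t : t + 1]) ** 2).mean()
    return loss / T
```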
Theorem 3.1 (informal). The Diffusion Forcing training procedure (Algorithm 1) optimizes a reweighting of the evidence lower bound (ELBO) on the expected log-likelihood ln p_θ(x_{1:T}), where the expectation averages over the noise levels and over sequences noised according to the forward process. Moreover, under appropriate conditions, optimizing (3.1) simultaneously maximizes a lower bound on the likelihood for all noise-level sequences.

Diffusion Forcing sampling and the resulting capabilities

Algorithm 2 describes the sampling process. It is defined by a noise schedule specified on a 2D M × T grid K ∈ [K]^{M×T}, where columns correspond to time steps t and rows, indexed by m, determine the noise level.
To generate an entire sequence of length T, the tokens x_{1:T} are first initialized to white noise, corresponding to noise level k = K. The sampler then iterates row by row down the grid, denoising column by column from left to right until each token reaches the noise level prescribed by that row. By the last row, m = 0, the tokens are fully denoised, i.e., K_{0,t} ≡ 0.
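Here is a hedged sketch of this grid sweep, reusing the pieces above. Encoding the grid as an (M + 1) × T integer tensor `schedule` (row 0 all K, last row all 0) and the deterministic DDIM-style jump between noise levels are illustrative choices, not necessarily the paper's exact update rule:

```python
def denoise_step(x, eps_pred, k_from, k_to, alphas_cumprod):
    # Deterministic (DDIM-style) jump from noise level k_from down to k_to.
    a_from, a_to = alphas_cumprod[k_from], alphas_cumprod[k_to]
    x0_pred = (x - (1 - a_from).sqrt() * eps_pred) / a_from.sqrt()
    return a_to.sqrt() * x0_pred + (1 - a_to).sqrt() * eps_pred

@torch.no_grad()
def sample_grid(model, schedule, alphas_cumprod, T, D, z0=None):
    """Sweep the noise-level grid row by row, denoising columns left to right.
    schedule: LongTensor of shape (M + 1, T), schedule[0] == K, schedule[-1] == 0.
    z0 optionally conditions the sweep on a hidden state from past tokens."""
    x = torch.randn(T, D)                          # all tokens start as white noise
    for m in range(1, schedule.shape[0]):          # row by row down the grid
        z = z0.clone() if z0 is not None else torch.zeros(1, model.cell.hidden_size)
        for t in range(T):                         # left to right along the time axis
            k_from, k_to = schedule[m - 1, t], schedule[m, t]
            z, eps_pred = model(z, x[t : t + 1], k_from.view(1))
            x[t] = denoise_step(x[t], eps_pred[0], k_from, k_to, alphas_cumprod)
    return x
```

Different schedules recover different behaviors: fully denoising one column before moving to the next behaves like next-token autoregression, lowering all columns in lockstep recovers full-sequence diffusion, and keeping far-future columns noisier than near-future ones is what keeps the future uncertain.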

This sampling paradigm brings the following new capabilities:

  • Stable autoregressive generation
  • Keeping the future uncertain
  • Long-horizon guidance

Using Diffusion Forcing for flexible sequential decision-making

The new capabilities of Diffusion Forcing open up new possibilities. Building on them, the team designed a new framework for sequential decision-making (SDM) and successfully applied it to robotics and autonomous agents.

First, define a Markov decision process with dynamics p(s_{t+1} | s_t, a_t), observations p(o_t | s_t), and rewards p(r_t | s_t, a_t). The goal is to train a policy π(a_t | o_{1:t}) that maximizes the expected cumulative reward E[Σ_{t=1}^T r_t] of a trajectory. Each token is assigned as x_t = [a_t, r_t, o_{t+1}]. A trajectory is a sequence x_{1:T}, possibly of variable length; training proceeds as in Algorithm 1.

At each step t of execution, there is a hidden state z_{t−1} summarizing the noise-free past tokens x_{1:t−1}. Conditioned on this hidden state, a plan x̂_{t:t+H} is sampled according to Algorithm 2, where x̂ contains predicted actions, rewards, and observations. H is a lookahead window, analogous to the prediction horizon in model predictive control. After the planned action is taken, the environment returns a reward and the next observation, yielding the next token. The hidden state is then updated according to the posterior p_θ(z_t | z_{t−1}, x_t, 0).
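Under the same assumptions as the sketches above, plus a gym-style environment API, the execute-and-filter loop might look like this; ACTION_DIM and the flat [a_t, r_t, o_{t+1}] token packing are hypothetical layout choices:

```python
ACTION_DIM = 4  # hypothetical action dimensionality

@torch.no_grad()
def run_episode(env, model, schedule, alphas_cumprod, H, token_dim, T_max):
    """Sample an H-step plan with sample_grid, execute only the first planned
    action (MPC-style), then filter the realized token in at noise level 0."""
    z = torch.zeros(1, model.cell.hidden_size)     # summarizes clean past tokens
    obs = env.reset()
    for _ in range(T_max):
        plan = sample_grid(model, schedule, alphas_cumprod, H, token_dim, z0=z)
        action = plan[0, :ACTION_DIM]              # first token's action slice
        obs, reward, done, _ = env.step(action.numpy())
        # Pack x_t = [a_t, r_t, o_{t+1}] and apply the Bayesian posterior
        # update p_theta(z_t | z_{t-1}, x_t, 0) by feeding the clean token.
        x_t = torch.cat([action,
                         torch.tensor([reward], dtype=torch.float32),
                         torch.as_tensor(obs, dtype=torch.float32)]).unsqueeze(0)
        z, _ = model(z, x_t, torch.zeros(1, dtype=torch.long))  # k_t = 0
        if done:
            break
```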

The framework can be used as both a policy and a planner, and its advantages include:

  • flexible planning horizons
  • flexible reward guidance
  • Monte Carlo Tree Guidance (MCTG), which accounts for future uncertainty

Experiments

The team evaluated the advantages of Diffusion Forcing as a generative sequence model across applications including video and time-series forecasting, planning, and imitation learning.

Video prediction: consistent, stable sequence generation and infinite rollout

For the video generative-modeling task, they trained a convolutional-RNN implementation of Causal Diffusion Forcing on Minecraft gameplay videos and DMLab navigation data.

Figure 3 shows the qualitative results of Diffusion Forcing versus the baselines.
Diffusion Forcing rolls out stably even beyond its training horizon, while the teacher-forcing and full-sequence diffusion baselines quickly diverge.

Diffusion planning: MCTG, causal uncertainty, and flexible horizon control

The capabilities of Diffusion Forcing bring unique benefits to decision-making. The team evaluated the newly proposed decision-making framework on D4RL, a standard offline reinforcement learning benchmark.
Table 1 gives the qualitative and quantitative evaluation results. As can be seen, Diffusion Forcing outperforms Diffuser and all other baselines in all six environments.
Controllable compositional sequence generation

The team found that subsequences of the sequences observed at training time can be flexibly composed, simply by modifying the sampling scheme.

They conducted experiments using a 2D trajectory dataset: on a square plane, all trajectories start from one corner and end up at the opposite corner, forming a kind of cross shape.

As shown in Figure 1 above, when compositional behavior is not needed, DF can maintain full memory and replicate the cross-shaped distribution. When composition is needed, the model can generate shorter plans memorylessly with MPC, stitching sub-trajectories of the cross into a V-shaped trajectory.

Robots: long-horizon imitation learning and robust visuomotor control

Diffusion Forcing also brings new opportunities for visuomotor control of real robots.

Imitation learning is a commonly used robot-control technique that learns a mapping from observations to expert-demonstrated actions. However, a lack of memory often makes imitation learning difficult on long-horizon tasks. DF not only alleviates this shortcoming, but also makes imitation learning more robust.

Imitation learning with memory. By teleoperating a Franka robot, the team collected a dataset of video and motions. As shown in Figure 4, the task is to swap the positions of an apple and an orange using a third slot. The fruits' initial positions are random, so there are two possible goal states.
Furthermore, when a fruit occupies the third slot, the desired outcome cannot be inferred from the current observation alone: the policy must remember the initial configuration to decide which fruit to move. Unlike commonly used behavior-cloning methods, DF naturally integrates memory into its hidden state. DF achieved an 80% success rate, while Diffusion Policy (currently the best memoryless imitation-learning algorithm) failed.
In addition, DF handles noise more robustly and can facilitate robot pretraining.

Time Series Forecasting: Diffusion forcing is an excellent general sequence model

For multivariate time-series forecasting tasks, the team's experiments show that DF is competitive with previous diffusion-based and Transformer-based models.

Please refer to the original paper for more technical details and experimental results.
