Table of Contents
Improving on DiT to boost text rendering
Reweighted Rectified Flow for continued performance gains
Model capability can be improved further
Netizens: the open-source promise was delivered on schedule, thank you

Stable Diffusion 3 technical report released: details of the same architecture as Sora revealed

Mar 07, 2024 pm 12:01 PM
Tags: SD3, MMDiT, text-to-image model

The technical report for Stable Diffusion 3, the new "king of text-to-image", has arrived.

The full report runs 28 pages and is packed with substance.


"Old rules", the promotional poster (⬇️) is directly generated with the model, and then shows off the text rendering ability:

[Image: promotional poster generated with SD3, showcasing its text rendering]

So how does SD3 achieve text rendering and prompt-following abilities that beat DALL·E 3 and Midjourney v6?

The technical report reveals:

It all relies on the multi-modal diffusion Transformer architecture MMDiT.

By applying separate sets of weights to the image and text representations, it achieves a stronger performance gain than previous versions; this is the key to its success.

Let’s open the report to see the details.

Improving on DiT to boost text rendering

When SD3 was first announced, Stability AI revealed that its architecture shares the same lineage as Sora: a Diffusion Transformer (DiT).

Now the answer is revealed:

Since a text-to-image model must handle both the text and image modalities, Stability AI goes a step beyond DiT and proposes a new architecture, MMDiT.

The "MM" here refers to "multimodal".

Like previous versions of Stable Diffusion, SD3 uses pretrained models to obtain suitable text and image representations.

Text is encoded using three different text embedders: two CLIP models and a T5 model.

Image tokens are encoded with an improved autoencoder model.

Since text and image embeddings are conceptually quite different, SD3 uses two independent sets of weights for the two modalities.

[Figure: MMDiT architecture diagram]

(Some netizens joked that the architecture diagram looks like it is kicking off the "Human Instrumentality Project"; yes, some people apparently clicked into the report only because it reminded them of Neon Genesis Evangelion.)


Getting back to the point: as shown in the figure above, this is equivalent to having an independent transformer for each modality, but their sequences are concatenated for the attention operation.

This way both representations can work in their own space while still taking the other into account.

Ultimately, this lets information "flow" between image and text tokens, improving the model's overall comprehension and the text rendering in its outputs.
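To make the idea concrete, here is a minimal sketch of one MMDiT-style block in PyTorch: each modality keeps its own projection and MLP weights, but the two token sequences are concatenated for a single attention operation. The class and parameter names are illustrative, PyTorch 2.x is assumed, and the timestep/conditioning modulation used by the real architecture is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointBlock(nn.Module):
    """Illustrative MMDiT-style block: separate weights per modality, shared attention."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        # Modality-specific norms, QKV projections and MLPs.
        self.txt_norm1, self.img_norm1 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.txt_qkv, self.img_qkv = nn.Linear(dim, dim * 3), nn.Linear(dim, dim * 3)
        self.txt_norm2, self.img_norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.txt_mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.img_mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, txt: torch.Tensor, img: torch.Tensor):
        # Project each modality with its own weights.
        tq, tk, tv = self.txt_qkv(self.txt_norm1(txt)).chunk(3, dim=-1)
        iq, ik, iv = self.img_qkv(self.img_norm1(img)).chunk(3, dim=-1)
        # Concatenate the sequences so attention spans both modalities.
        q = torch.cat([tq, iq], dim=1)
        k = torch.cat([tk, ik], dim=1)
        v = torch.cat([tv, iv], dim=1)
        b, n, d = q.shape
        h = self.num_heads
        q, k, v = (x.view(b, n, h, d // h).transpose(1, 2) for x in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v).transpose(1, 2).reshape(b, n, d)
        # Split back into per-modality streams; each gets its own residual + MLP.
        t_out, i_out = out[:, :txt.shape[1]], out[:, txt.shape[1]:]
        txt = txt + t_out
        img = img + i_out
        txt = txt + self.txt_mlp(self.txt_norm2(txt))
        img = img + self.img_mlp(self.img_norm2(img))
        return txt, img
```

A single forward pass on dummy tensors, e.g. JointBlock(512)(torch.randn(1, 77, 512), torch.randn(1, 256, 512)), returns updated text and image token streams.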

And as shown previously, this architecture can easily be extended to video and other modalities.


Tests show that MMDiT outperforms plain DiT:

In both visual fidelity and text alignment during training, it beats existing text-to-image backbones such as UViT and DiT.


Reweighted Rectified Flow for continued performance gains

When SD3 was first announced, besides the diffusion Transformer architecture, the official post also revealed that SD3 incorporates flow matching.

What "flow"?

As revealed in the title of the paper released today, SD3 uses "Rectified Flow" (RF).


This is an "extremely simplified, one-step generation" new diffusion model generation method, which was selected for ICLR2023.

It connects data and noise along a linear trajectory during training, yielding a "straighter" inference path that can be sampled with fewer steps.
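As a rough illustration of why a straight trajectory allows few-step sampling, here is a minimal Euler sampler for a velocity-predicting model in PyTorch; the step count, the plain Euler update and the model signature are assumptions for the sketch, not the exact sampler described in the report.

```python
import torch

@torch.no_grad()
def rf_sample(model, shape, num_steps: int = 28, device: str = "cpu"):
    # Start from pure noise at t = 1 and walk the (nearly) straight line to t = 0.
    x = torch.randn(shape, device=device)
    ts = torch.linspace(1.0, 0.0, num_steps + 1, device=device)
    for i in range(num_steps):
        t = torch.full((shape[0],), ts[i].item(), device=device)
        velocity = model(x, t)                    # predicted direction noise -> data
        x = x + (ts[i + 1] - ts[i]) * velocity    # straight-line Euler step
    return x
```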

Building on RF, SD3 introduces a new trajectory-sampling schedule during training.

It gives more weight to the middle of the trajectory, because the authors assume these parts pose the most challenging prediction tasks.
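A minimal sketch of what such a reweighted rectified-flow training step could look like, assuming PyTorch; the logit-normal draw of t is just one way to put more probability mass on the middle of the trajectory, and is not necessarily the exact weighting used in the report.

```python
import torch

def rf_training_loss(model, x0: torch.Tensor) -> torch.Tensor:
    noise = torch.randn_like(x0)
    # Sample t so that values near 0.5 (the middle of the trajectory) are more likely.
    t = torch.sigmoid(torch.randn(x0.shape[0], device=x0.device))
    t_ = t.view(-1, *([1] * (x0.dim() - 1)))
    # Rectified flow: data and noise are connected by a straight line.
    x_t = (1.0 - t_) * x0 + t_ * noise
    # The model is trained to predict the constant velocity along that line.
    target_velocity = noise - x0
    pred = model(x_t, t)  # assumed model signature for this sketch
    return torch.mean((pred - target_velocity) ** 2)
```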

Testing this approach against 60 other diffusion trajectory formulations (such as LDM, EDM and ADM) across multiple datasets, metrics and sampler configurations showed:

While previous RF formulations perform well in few-step sampling regimes, their relative performance declines as the number of steps increases.

In contrast, the SD3 reweighted RF variant consistently improves performance.

Model capability can be improved further

Stability AI conducted a scaling study on text-to-image generation using the reweighted RF method and the MMDiT architecture.

The trained models range from 15 blocks with 450 million parameters to 38 blocks with 8 billion parameters.

They observed that as model size and training steps increase, the validation loss declines smoothly, meaning the model keeps learning to fit more complex data.


To test whether this translates into meaningful improvements in model output, they also evaluated an automatic image-alignment metric (GenEval) as well as human preference ratings (ELO).

The result is:

There is a strong correlation between the two; that is, validation loss can serve as a strong predictor of overall model performance.


In addition, since the scaling trend shows no sign of saturation (performance keeps improving as the model grows and has not hit a ceiling), the team is optimistic:

The performance of SD3 can continue to improve in the future.

Finally, the technical report also discusses the text encoders:

By removing the memory-intensive 4.7-billion-parameter T5 text encoder at inference time, SD3's memory requirements can be significantly reduced, while the performance loss is small (win rate drops from 50% to 46%).

However, for the sake of text rendering, the team still recommends keeping T5, because without it the win rate on text rendering drops to 38%.


To summarize: of SD3's three text encoders, T5 contributes the most when generating images that contain text (and images from highly detailed scene descriptions).
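To illustrate the trade-off, here is a conceptual sketch of how the three encoder outputs could be combined and how the T5 stream can be dropped at inference by substituting zeros; the embedding widths, padding and sequence lengths are assumptions for illustration, not the exact layout used in SD3.

```python
import torch
import torch.nn.functional as F

def build_text_conditioning(clip_l_emb, clip_g_emb, t5_emb=None):
    """clip_l_emb: (B, N, 768), clip_g_emb: (B, N, 1280), t5_emb: (B, M, 4096) or None."""
    # Concatenate the two CLIP streams along the channel axis...
    clip_emb = torch.cat([clip_l_emb, clip_g_emb], dim=-1)              # (B, N, 2048)
    # ...and pad them to the assumed T5 width so the sequences can be joined.
    clip_emb = F.pad(clip_emb, (0, 4096 - clip_emb.shape[-1]))          # (B, N, 4096)
    if t5_emb is None:
        # Dropping the memory-heavy T5 encoder: replace its tokens with zeros.
        t5_emb = torch.zeros(clip_emb.shape[0], 77, 4096, device=clip_emb.device)
    return torch.cat([clip_emb, t5_emb], dim=1)                          # join along the sequence
```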

Netizens: the open-source promise was delivered on schedule, thank you

As soon as the SD3 report came out, many netizens said:

Stability AI's open-source commitment has been fulfilled on schedule. It's a delight, and I hope they can keep going for a long time.


Some even called out OpenAI by name:


Even more gratifying, someone in the comments noted:

All SD3 model weights will be downloadable; versions with 800 million, 2 billion and 8 billion parameters are currently planned.


What about generation speed?

Ahem, the technical report mentioned:

The 8-billion-parameter SD3 takes 34 s to generate a 1024×1024 image on a 24 GB RTX 4090 (50 sampling steps), but this is only an early, unoptimized inference test result.

Full text of the report: https://stabilityai-public-packages.s3.us-west-2.amazonaws.com/Stable Diffusion 3 Paper.pdf.
Reference link:
[1] https://stability.ai/news/stable-diffusion-3-research-paper
[2] https://news.ycombinator.com/item?id=39599958

