Recently, the large-model research team at the Beijing Academy of Artificial Intelligence (BAAI, Zhiyuan Research Institute) open-sourced its latest bilingual AltDiffusion model, giving a strong boost to professional-level AI text-to-image creation for the Chinese-speaking world:
It supports advanced creation with fine-grained, long Chinese prompts; it needs no cultural translation, turning native Chinese descriptions into Chinese paintings faithful in both form and spirit; and it lowers the entry barrier while keeping Chinese and English aligned, delivering visual effects at the striking level of the original Stable Diffusion. It can fairly be called a world-class, Chinese-speaking AI painting master.
The innovative AltCLIP model is the cornerstone of this work, extending the original CLIP model with stronger capabilities along three cross-language dimensions. Both AltDiffusion and AltCLIP are multilingual models; the Chinese-English bilingual versions are the first stage of this work, and their code and weights have been open-sourced.
AltDiffusion
https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion
AltCLIP
https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP
Hugging Face Space demo address:
https://huggingface.co/spaces/BAAI/bilingual_stable_diffusion
Technical Report
https://arxiv.org/abs/2211.06679
Professional Chinese AltDiffusion
——Fine-grained painting from long prompts with a native Chinese style, meeting the demands of professional Chinese AI creators
Benefiting from AltCLIP's powerful Chinese-English bilingual alignment, AltDiffusion reaches a level of visual quality comparable to Stable Diffusion. In particular, it has the unique advantages of understanding Chinese better and being better at Chinese-style painting, making it well worth the expectations of professional Chinese text-to-image creators.
1. Long-prompt generation, with image quality on par with the original
Prompt length is a watershed test of a model's text-to-image ability: the longer the prompt, the more it stresses language understanding, image-text alignment, and cross-language capability. Given the same long Chinese and English prompts, AltDiffusion is even more expressive in many generation cases: the composition is rich and striking, and the details are rendered delicately and accurately.
2. Better understanding of Chinese, better at Chinese-style painting
Beyond producing similar results for Chinese and English prompts, AltDiffusion can also make up for Western models' weakness in Chinese painting styles: it can be further fine-tuned on Chinese image-text pairs to build generators with Chinese characteristics, such as a Chinese-painting-style model that produces a genuine "Chinese style".
AltDiffusion understands Chinese better: it grasps meaning in the Chinese cultural context and immediately understands the creator's intent. For example, a description like "the grand scene of the prosperous Tang Dynasty" does not go off-topic because of cultural misunderstanding.
In particular, concepts originating from Chinese culture are understood and expressed more accurately, avoiding the absurd situation of confusing "Japanese style" with "Chinese style". For example, when prompts for a Tang-suit character style are given to Stable Diffusion in Chinese and English, the difference is clear at a glance:
When generating a particular style, it natively takes the Chinese cultural context as the default frame of reference. For example, for the prompt containing "ancient architecture" below, ancient Chinese architecture is generated by default, a creative style more in line with the identity of Chinese creators.
AltDiffusion is built on Stable Diffusion: the CLIP text encoder in the original Stable Diffusion is replaced with AltCLIP, and the model is then further trained on Chinese and English image-text pairs. Thanks to AltCLIP's strong cross-language alignment, AltDiffusion's English generations are very close to Stable Diffusion's, and its Chinese and English outputs are also consistent with each other.
For example, after feeding AltDiffusion the Chinese and English prompts for "puppy in a hat", the generated images are closely aligned, with very high consistency:
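As a rough, minimal sketch of how such a bilingual comparison could be run locally (assuming a diffusers version that still ships the AltDiffusion pipeline and a Hugging Face Hub checkpoint named BAAI/AltDiffusion; neither is specified in this article, so treat both as assumptions):

```python
import torch
from diffusers import DiffusionPipeline

# Assumption: the released weights are on the Hub as "BAAI/AltDiffusion" and the
# installed diffusers version can auto-resolve the AltDiffusion pipeline class.
pipe = DiffusionPipeline.from_pretrained("BAAI/AltDiffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt_zh = "戴帽子的小狗"            # Chinese: "puppy in a hat"
prompt_en = "a puppy wearing a hat"   # English prompt with the same meaning

# Use the same seed for both prompts so any difference comes from the text encoder.
gen = torch.Generator(device="cuda").manual_seed(42)
image_zh = pipe(prompt_zh, num_inference_steps=50, guidance_scale=7.5, generator=gen).images[0]

gen = torch.Generator(device="cuda").manual_seed(42)
image_en = pipe(prompt_en, num_inference_steps=50, guidance_scale=7.5, generator=gen).images[0]

image_zh.save("puppy_zh.png")
image_en.save("puppy_en.png")
```

Because the text encoder is the bilingual AltCLIP, the two prompts should map to nearly the same region of the text-embedding space, which is what drives the consistency described above.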
After adding the descriptor "Chinese" to the "boy" prompt, the original image of the little boy is precisely adjusted into a typically "Chinese" child, showing excellent language understanding and accurate expression in language-controlled generation.
——A rich tool ecosystem and reusable prompt collections for excellent playability
Particularly worth mentioning is AltDiffusion's compatibility with the existing open ecosystem:
All tools that support Stable Diffusion, such as Stable Diffusion WebUI and DreamBooth, can be applied to this Chinese-English bilingual diffusion model, providing a wealth of options for Chinese AI creation:
Stable Diffusion WebUI is an excellent web tool for text-to-image generation and image editing; when we turn a night view of Peking University into Hogwarts (prompt: Hogwarts), the dreamy magical world appears in an instant.
DreamBooth is a tool for tuning the model with a small number of samples to produce a specific style; with it, a particular style can be taught to AltDiffusion using just a few Chinese images, such as the "Havoc in Heaven" style.
Prompts are crucial for generative models, and community users have accumulated a rich body of proven examples through extensive prompt experimentation. Almost all of this valuable prompt experience carries over to AltDiffusion users!
In addition, you can mix Chinese and English in the same prompt to combine particular styles and elements, or keep exploring Chinese prompts well-suited to AltDiffusion.
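Purely as an illustration of this mixing (reusing the assumed checkpoint name from the sketch above, and echoing the Peking University / Hogwarts motif mentioned earlier), a mixed Chinese/English text-to-image prompt might look like:

```python
import torch
from diffusers import DiffusionPipeline

# Same assumptions as the earlier sketch: the checkpoint name and pipeline support are illustrative.
pipe = DiffusionPipeline.from_pretrained("BAAI/AltDiffusion", torch_dtype=torch.float16).to("cuda")

# Chinese scene description combined with English style keywords in one prompt.
mixed_prompt = "北京大学的夜景, Hogwarts, magical atmosphere, highly detailed"
image = pipe(mixed_prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("pku_hogwarts.png")
```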
The open-sourced AltDiffusion provides a foundation for Chinese generative models; on top of it, you can fine-tune with more domain-specific Chinese data to make expression easier for Chinese creators.
——Comprehensive gains in the three key cross-language capabilities: Chinese-English alignment, better Chinese, and an extremely low threshold
Language understanding, image-text alignment, and cross-language transfer are the three capabilities that cross-language research requires.
AltDiffusion's many professional-level capabilities derive from AltCLIP's innovative "tower-swap" idea, which strengthens all three of these capabilities: the Chinese-English alignment of the original CLIP is greatly improved; the model connects seamlessly to all models and ecosystem tools built on the original CLIP, such as Stable Diffusion; and at the same time it gains strong Chinese ability, achieving better Chinese results on multiple datasets. (See the technical report for details.)
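For a rough picture of what the bilingual encoder itself does, the sketch below scores one image against Chinese and English captions (it assumes the AltCLIP support available in the Hugging Face transformers library and a Hub checkpoint named BAAI/AltCLIP, neither of which is stated in this article):

```python
import requests
import torch
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

# Assumption: the bilingual weights are published on the Hub as "BAAI/AltCLIP".
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # two cats on a couch
image = Image.open(requests.get(url, stream=True).raw)

# The same caption in Chinese and English, plus a distractor; a well-aligned
# bilingual encoder should rank the two matching captions similarly.
texts = ["两只猫躺在沙发上", "two cats lying on a couch", "a dog running on grass"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=1)
print(probs)  # the Chinese and English matching captions should both score high
```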
It is worth mentioning that this alignment method greatly lowers the barrier to training multilingual, multimodal representation models: compared with redoing Chinese or English image-text pre-training from scratch, it needs only about 1% of the compute and image-text pair data.
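One way to picture why so little data and compute can suffice is a teacher-student alignment step: a multilingual text encoder is pulled toward the frozen English CLIP text tower on parallel sentences, so no image data is needed at that stage. The sketch below is only an illustration under that assumption, with all names hypothetical; the authors' actual procedure is described in the technical report:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: align a trainable multilingual text encoder (the swapped-in
# "tower") with a frozen English CLIP text encoder using parallel zh/en sentences.
def alignment_step(frozen_clip_text_encoder, multilingual_encoder, batch, optimizer):
    with torch.no_grad():
        target = frozen_clip_text_encoder(batch["english_tokens"])  # frozen teacher embeddings

    pred_zh = multilingual_encoder(batch["chinese_tokens"])   # student output for the Chinese side
    pred_en = multilingual_encoder(batch["english_tokens"])   # student output for the English side

    mse = nn.MSELoss()
    loss = mse(pred_zh, target) + mse(pred_en, target)  # pull both languages onto CLIP's text space

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```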
It matches the original English version on comprehensive CLIP benchmarks.
On some retrieval datasets, such as Flickr30K, it performs better than the original CLIP.
It achieves the best zero-shot results on Chinese ImageNet.