


AltDiffusion-m18: a versatile tool for multilingual text-to-image generation
Currently, the choice of non-English text-to-image generation models is limited, and users often have to translate their prompts into English before feeding them to a model. This adds an extra operational burden, and the linguistic and cultural errors introduced during translation degrade the accuracy of the generated images.
The FlagAI team at the Beijing Academy of Artificial Intelligence (Zhiyuan Research Institute) developed an efficient training method that combines a multilingual pre-trained model with Stable Diffusion to build AltDiffusion-m18, a multilingual text-to-image generation model supporting 18 languages:
Chinese, English, Japanese, Thai, Korean, Hindi, Ukrainian, Arabic, Turkish, Vietnamese, Polish, Dutch, Portuguese, Italian, Spanish, German, French, and Russian.
Huggingface: https://huggingface.co/BAAI/AltDiffusion-m18
GitHub: https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion-m18
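For readers who want to try the model, the minimal sketch below shows one way to load the Hugging Face checkpoint with the diffusers AltDiffusion pipeline. This is an assumption based on the model card linked above rather than code from this article; if your diffusers version no longer ships AltDiffusionPipeline, the FlagAI example in the GitHub link provides an alternative loader.

```python
# A minimal sketch (not from the article): text-to-image with the Hugging Face checkpoint,
# assuming a diffusers version that still ships the AltDiffusion pipelines.
import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained(
    "BAAI/AltDiffusion-m18", torch_dtype=torch.float16
).to("cuda")

# Prompts can be written directly in any of the 18 supported languages.
prompt = "黑暗精灵公主，非常详细，幻想，数字绘画"  # "dark elf princess, highly detailed, fantasy, digital painting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dark_elf_princess.png")
```

Because the checkpoint keeps Stable Diffusion's latent-diffusion backbone, the usual sampler, step-count, and guidance-scale settings behave the same way as in Stable Diffusion.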
In objective evaluations with FID, IS, and CLIP score, AltDiffusion-m18 reaches 95-99% of Stable Diffusion's performance in English, achieves state-of-the-art results in Chinese and Japanese, and fills the gap in text-to-image generation models for the remaining 15 languages, addressing the industry's strong demand for multilingual text-to-image generation. Special thanks to the Stable Diffusion research team for their advice on this work.
In addition, the technical report behind AltDiffusion-m18, "AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities", has been accepted to Findings of ACL 2023.
Technical Highlights
1 New AltCLIP: building a multilingual T2I model efficiently and at low cost
In AltDiffusion-m9, released last year and based on Stable Diffusion v1.4, the Zhiyuan team replaced the language tower with the multilingual tower AltCLIP and fine-tuned on data in nine languages, extending the original English-only Stable Diffusion to support nine languages.
AltCLIP: https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP-m18
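As a rough illustration of what the swapped-in language tower does, the sketch below scores an image against captions in two languages using the AltCLIP classes provided by Hugging Face transformers. The m18 checkpoint name follows the links above and is an assumption, not something stated in this article; the FlagAI repository also ships its own loader.

```python
# A hedged sketch: multilingual image-text matching with AltCLIP via transformers.
import torch
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP-m18")           # assumed checkpoint name
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP-m18")

image = Image.open("cat.jpg")  # any local image file
captions = ["一张猫的照片", "a photo of a dog"]  # Chinese "a photo of a cat" vs. an English distractor

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# The higher probability should land on the caption that matches the image,
# regardless of which language the caption is written in.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```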
AltDiffusion-m18, by contrast, is trained on Stable Diffusion v2.1, whose language tower outputs the penultimate layer of OpenCLIP. The new AltCLIP is therefore retrained with OpenCLIP's penultimate layer as the distillation target, and the m9 recipe of fine-tuning only the K and V matrices of the UNet's cross-attention layers is expanded into a two-stage training method, as shown in the figure below:
- Stage one: earlier experiments on m9 showed that fine-tuning the K and V matrices mainly learns the concept alignment between text and images, so the first stage of m18 training continues to fine-tune the K and V matrices on data in 18 languages (a minimal sketch of this setup appears after this list). Experiments also showed that reducing the image resolution from 512×512 to 256×256 does not lose the semantic information of the image, so this concept-alignment stage trains at 256×256, which speeds up training.
- Stage two: to further improve the quality of the generated images, the full UNet parameters are trained at 512×512 resolution on data in 18 languages. In addition, 10% of the text prompts are dropped for unconditional training, to support classifier-free guidance at inference time.
- In addition, classifier-free guidance training is adopted to further improve generation quality.
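To make the two-stage recipe concrete, here is a minimal sketch of what stage one could look like with diffusers: freeze the UNet and unfreeze only the cross-attention key/value projections. The module names (attn2.to_k / attn2.to_v) and the Stable Diffusion v2.1 checkpoint path follow diffusers conventions, not the FlagAI training code, so treat this as an illustration rather than the team's actual implementation.

```python
# Sketch of stage one: fine-tune only the cross-attention K/V projections of the UNet.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet"
)

# Freeze everything first...
unet.requires_grad_(False)

# ...then re-enable gradients only for the key/value projections of cross-attention
# (in diffusers, the cross-attention blocks are named "attn2").
kv_params = []
for name, param in unet.named_parameters():
    if "attn2.to_k" in name or "attn2.to_v" in name:
        param.requires_grad = True
        kv_params.append(param)

optimizer = torch.optim.AdamW(kv_params, lr=1e-5)

# Stage two would instead unfreeze the whole UNet and train at 512x512.
# In both cases, dropping ~10% of captions (replacing them with "") teaches the model
# the unconditional distribution used by classifier-free guidance at inference:
#   noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)
```

Training only the K and V matrices keeps the number of updated parameters small, which is why concept alignment can be learned quickly at the reduced 256×256 resolution before the full 512×512 stage.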
The latest evaluation results show that AltCLIP-m18 surpasses CLIP and reaches state-of-the-art results on Chinese and English zero-shot retrieval tasks ⬇️
On multilingual image classification benchmarks, AltCLIP-m9 (the earlier version, supporting 9 languages) and AltCLIP-m18 both reach state-of-the-art results ⬇️
Likewise, thanks to AltCLIP's tower-swapping design, AltDiffusion-m18 plugs seamlessly into the Stable Diffusion models and ecosystem tools built on the original CLIP. Tools that support Stable Diffusion, such as Stable Diffusion WebUI and DreamBooth, can be used with AltDiffusion-m18 directly, making it easy to get started and highly flexible.
2 Aligned multilingual generation quality, with strong performance and accurate details
With the new AltCLIP, AltDiffusion-m18 achieves 95-99% of the original Stable Diffusion's scores on the English FID, IS, and CLIP score evaluations, and reaches state-of-the-art performance in 17 languages including Chinese and Japanese. The performance of AltDiffusion-m18 is shown in the following table: