Kohya brought massive improvements to FLUX LoRA (4GB GPUs) and DreamBooth / Fine-Tuning (6GB GPUs) training

Release: 2024-11-21 08:57:09

You can download all configs and full instructions from these posts:

https://www.patreon.com/posts/112099700 - Fine Tuning post

https://www.patreon.com/posts/110879657 - LoRA post

Kohya brought massive improvements to FLUX LoRA and DreamBooth / Fine-Tuning (minimum 6GB GPU) training.

Now GPUs with as little as 4GB of VRAM can train a FLUX LoRA with decent quality, and GPUs with 24GB or less got a huge speed boost when doing full DreamBooth / Fine-Tuning training.

You need a minimum of a 4GB GPU to do FLUX LoRA training and a minimum of a 6GB GPU to do FLUX DreamBooth / Full Fine-Tuning training. It is just mind-blowing.

The Fine-Tuning post above (https://www.patreon.com/posts/112099700) also has 1-click installers and downloaders for Windows, RunPod, and Massed Compute.

The model downloader scripts were also updated; downloading 30 GB of models takes about 1 minute in total on Massed Compute.
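
As an illustration of what such a downloader can look like, here is a sketch built on the Hugging Face CLI (an assumption on my part, not the actual Patreon script; the repos are the public Hugging Face ones, and FLUX.1-dev requires accepting its license on the Hub first):

# Sketch of a FLUX model downloader using the Hugging Face CLI.
# hf_transfer speeds up downloads on fast connections such as Massed Compute.
pip install -U "huggingface_hub[hf_transfer]"
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors --local-dir models
huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors --local-dir models
huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir models
huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir models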

You can read the recent updates here: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#recent-updates

This is the Kohya GUI branch: https://github.com/bmaltais/kohya_ss/tree/sd3-flux.1

The key to reducing VRAM usage is block swapping.

Kohya implemented the logic of OneTrainer to improve block-swapping speed significantly, and block swap is now supported for LoRA training as well.
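
To illustrate how block swap is enabled, a FLUX fine-tuning run with sd-scripts looks roughly like this. This is a sketch assuming the sd-scripts sd3 branch; all paths, dataset.toml, and hyperparameters are placeholders, not the tuned configs from the Patreon post:

# Rough sketch of a FLUX DreamBooth / fine-tuning command with block swap.
accelerate launch flux_train.py \
  --pretrained_model_name_or_path models/flux1-dev.safetensors \
  --clip_l models/clip_l.safetensors \
  --t5xxl models/t5xxl_fp16.safetensors \
  --ae models/ae.safetensors \
  --dataset_config dataset.toml \
  --output_dir output --output_name flux-ft --save_model_as safetensors \
  --mixed_precision bf16 --save_precision bf16 --sdpa --gradient_checkpointing \
  --timestep_sampling shift --model_prediction_type raw --guidance_scale 1.0 \
  --optimizer_type adafactor \
  --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" \
  --learning_rate 1e-5 --max_train_steps 2000 \
  --fused_backward_pass \
  --blocks_to_swap 18
# --blocks_to_swap moves that many transformer blocks to CPU RAM during
# training; raise it on smaller GPUs to trade some speed for VRAM.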

Now you can do FP16 training with LoRAs on GPUs with 24GB or less.

Now you can train a FLUX LoRA on a 4GB GPU. The keys are FP8, block swap, and training only certain layers (remember single-layer LoRA training); see the sketch below.
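
For the 4GB end, a LoRA command would look roughly like this. Again a hedged sketch: the train_double_block_indices / train_single_block_indices names follow the sd-scripts FLUX LoRA docs, the index values are arbitrary examples rather than the tested Patreon configs, and accepted values may differ by version:

# Rough sketch of a low-VRAM FLUX LoRA command: FP8 base model, block swap,
# and training only selected layers. Values are illustrative only.
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path models/flux1-dev.safetensors \
  --clip_l models/clip_l.safetensors \
  --t5xxl models/t5xxl_fp16.safetensors \
  --ae models/ae.safetensors \
  --dataset_config dataset.toml \
  --output_dir output --output_name flux-lora --save_model_as safetensors \
  --network_module networks.lora_flux --network_dim 16 --network_train_unet_only \
  --mixed_precision bf16 --save_precision bf16 --sdpa --gradient_checkpointing \
  --timestep_sampling shift --model_prediction_type raw --guidance_scale 1.0 \
  --optimizer_type adamw8bit --learning_rate 1e-4 \
  --cache_latents_to_disk --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk \
  --fp8_base \
  --blocks_to_swap 30 \
  --network_args "train_double_block_indices=none" "train_single_block_indices=7"
# --fp8_base loads the base model in FP8; --network_args here restricts
# training to selected blocks (check your sd-scripts version for the exact
# accepted values).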

It took me more than a day to test all the newer configs, their VRAM demands, and their relative step speeds, and to prepare the configs :)
