
The author of ControlNet has another hit! Generating the entire painting process from a single image, earning 1.4k stars in two days

By Wang Lin (王林)
Published: 2024-07-17
It is also image-to-video generation, but PaintsUndo has taken a different route.

ControlNet author Lvmin Zhang is back with a new project, and this time he is taking aim at the field of painting.

The new project, PaintsUndo, received 1.4k stars (and the count is still climbing rapidly) shortly after launch.


Project address: https://github.com/lllyasviel/Paints-UNDO

With this project, a user inputs a static image and PaintsUndo automatically generates a video of the entire painting process, traceable step by step from the line-art draft to the finished piece.


The way the lines evolve during the drawing process is striking, and the final frame of the video closely matches the original image:


Let's look at a complete painting process. PaintsUndo first outlines the main body of the character with simple lines, then draws the background, applies color, and finally fine-tunes the result until it resembles the original image.

PaintsUndo is not limited to a single image style; for different types of images, it generates a corresponding painting-process video.


The corgi wearing a hood looks gently into the distance:


Users can also input a single image and output multiple videos:


However, PaintsUndo also has shortcomings: it struggles with complex compositions, and the author says the project is still being refined.


PaintsUndo is powered by a series of models that take an image as input and output a drawing sequence for that image. The models reproduce a variety of human behaviors, including but not limited to sketching, inking, coloring, shading, transforming, flipping left and right, adjusting color curves, changing the visibility of layers, and even changing the overall idea mid-drawing.

The local deployment process is very simple and can be completed with a few lines of code:

git clone https://github.com/lllyasviel/Paints-UNDO.git
cd Paints-UNDO
conda create -n paints_undo python=3.10
conda activate paints_undo
pip install xformers
pip install -r requirements.txt
python gradio_app.py

Model introduction

The project author tested inference on an Nvidia RTX 4090 and a 3090 Ti with 24 GB of VRAM, and estimates that with extreme optimizations (including weight offloading and attention slicing) the theoretical minimum VRAM requirement is around 10-12.5 GB. PaintsUndo takes roughly 5 to 10 minutes per image, depending on the settings, and typically produces a 25-second video at a resolution of 320x512, 512x320, 384x448, or 448x384.

Currently, the project has released two models: the single-frame model paints_undo_single_frame and the multi-frame model paints_undo_multi_frame.

The single-frame model uses a modified SD1.5 architecture, taking an image and an operation step as input and outputting an image. Assuming a piece of art typically takes 1,000 manual operations to create (for example, one stroke per operation), the operation step is an integer between 0 and 999, where 0 is the final finished artwork and 999 is the first stroke on a pure white canvas.
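To make that interface concrete, here is a minimal sketch of how such a step-conditioned model could be called. The undo_to_step helper and the model's call signature are hypothetical illustrations, not the project's actual API:

from PIL import Image

def undo_to_step(model, finished: Image.Image, step: int) -> Image.Image:
    """Return the canvas as it would look `step` operations before completion.

    step=0 is the finished piece; step=999 is the first stroke on a blank
    canvas. `model` is a hypothetical callable wrapping paints_undo_single_frame.
    """
    if not 0 <= step <= 999:
        raise ValueError("operation step must be an integer in [0, 999]")
    # The model is conditioned on both the finished image and the step.
    return model(image=finished, operation_step=step)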

The multi-frame model is based on the VideoCrafter family of models but does not use the original Crafter's lvdm; all training and inference code was implemented from scratch. The project author made many changes to the network topology, and after extensive training the network behaves very differently from the original Crafter.

The overall architecture of the multi-frame model is similar to Crafter's and includes five components: a 3D-UNet, a VAE, CLIP, CLIP-Vision, and an Image Projection module.
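As a rough orientation, the five components could be grouped as below. The field names and role comments are assumptions drawn from typical Crafter-style video diffusion pipelines, not the project's actual class layout:

from dataclasses import dataclass
from typing import Any

@dataclass
class MultiFrameComponents:
    # Roles are assumed from typical Crafter-style video diffusion models.
    unet3d: Any       # 3D-UNet: denoises the latent video across time
    vae: Any          # VAE: encodes/decodes frames to/from latent space
    clip: Any         # CLIP: text encoder for prompt conditioning
    clip_vision: Any  # CLIP-Vision: encodes the input images
    image_proj: Any   # Image Projection: maps CLIP-Vision features into the UNet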

The multi-frame model takes two images as input and outputs 16 intermediate frames between them. It produces more consistent results than the single-frame model, but it is also much slower, less "creative", and limited to 16 frames.

By default, PaintsUndo uses the two models together: the single-frame model is first run about 5-7 times to obtain 5-7 "keyframes", and the multi-frame model then "interpolates" between those keyframes to produce a relatively long final video.
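A minimal sketch of that two-stage pipeline, under stated assumptions, might look like this. single_frame_model and multi_frame_model are hypothetical callables standing in for the released models (the actual wiring lives in gradio_app.py), and the evenly spaced operation steps follow the 999-to-0 convention described earlier:

from PIL import Image

def generate_process_video(finished: Image.Image,
                           single_frame_model,
                           multi_frame_model,
                           num_keyframes: int = 6) -> list[Image.Image]:
    # Stage 1: sample keyframes at evenly spaced operation steps,
    # from the first stroke (step 999) down to the finished piece (step 0).
    steps = [round(999 * (1 - i / (num_keyframes - 1)))
             for i in range(num_keyframes)]
    keyframes = [single_frame_model(image=finished, operation_step=s)
                 for s in steps]

    # Stage 2: interpolate 16 in-between frames for each adjacent keyframe pair.
    frames: list[Image.Image] = []
    for start, end in zip(keyframes, keyframes[1:]):
        frames.append(start)
        frames.extend(multi_frame_model(start_image=start, end_image=end))
    frames.append(keyframes[-1])
    return frames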

Reference link: https://lllyasviel.github.io/pages/paints_undo/


Source: jiqizhixin.com