How to Make Smooth AI Animation with ControlNet

Hello everyone, today I'm going to explain how to create smooth AI animations using ControlNet.
If you've used ControlNet through the Stable Diffusion WebUI, today's tutorial will be very helpful, so I've prepared it for you.


ControlNet

ControlNet is a neural network structure that makes it possible to steer pre-trained large diffusion models with additional conditions. Some of the conditions you can use include edge maps, segmentation maps, key points, and more. To put it simply, it's an extension used to extract the guidance data we need (such as poses and edges) from the reference material (videos, photos).

I’ve previously introduced how to create animations using ControlNet in a past post. It might be a good idea to check that out beforehand through the link below!

→ Go to Post (AI Animation by Img2Img)

1. Preparation

  • Installation of WebUI Stable Diffusion
  • Installation of the ControlNet extension
  • A reference video prepared as a PNG sequence, size: 512 × 768

Many of you reading this post might already have installed WebUI and the ControlNet extension. However, for those who are new, let’s go over it again.

1) Installation of WebUI Stable Diffusion / Automatic1111

Follow the link below to a previous post for a quick installation guide. If you haven’t installed it yet, be sure to check it out!

→ Go to Post (How to Install Web UI Stable Diffusion)

2) Installation of ControlNet extension

Copy the URL I’m sharing below, open the Extensions tab in Stable Diffusion, paste it into the Install from URL field, and proceed with the installation. Once installed, click Apply and Restart UI to reboot Automatic1111.

Copy URL: https://github.com/Mikubill/sd-webui-controlnet.git

3) Preparation of PNG or JPG Sequence

The reference video you want to practice with should be trimmed to about 3-5 seconds and extracted as a PNG or JPG sequence. If you use Adobe programs, set the video's output type to PNG Sequence. If you don't have dedicated software, online tools can extract the images for you.

One tip: most standard videos run at 30fps, meaning 30 images per second. We recommend extracting at 24fps instead, so 24 images per second. It may not seem like a big difference, but over a 10-second clip that's 60 fewer images to extract and process (240 instead of 300), and the animation comes out just as smooth, which is helpful for those worried about their computer overheating.
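
If you'd rather not rely on an online converter, a short Python script with OpenCV (`pip install opencv-python`) can do the extraction too. This is only a minimal sketch under my own assumptions: the file names and paths are placeholders, and it assumes your source clip is already in a 2:3 portrait framing before resizing to 512 × 768.

```python
import cv2
from pathlib import Path

VIDEO_PATH = "reference.mp4"    # placeholder: your source clip
OUT_DIR = Path("png_sequence")  # placeholder: where frames will be written
TARGET_FPS = 24

OUT_DIR.mkdir(exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
src_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30fps if unknown

saved = 0
next_keep = 0.0  # source-frame index at which to keep the next frame
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep a frame each time the source index crosses the next 24fps slot.
    if frame_idx >= next_keep:
        frame = cv2.resize(frame, (512, 768))  # assumes a 2:3 portrait source
        # Zero-padded names keep the sequence in order when sorted later.
        cv2.imwrite(str(OUT_DIR / f"frame_{saved:04d}.png"), frame)
        saved += 1
        next_keep += src_fps / TARGET_FPS
    frame_idx += 1

cap.release()
print(f"Saved {saved} frames at ~{TARGET_FPS}fps to {OUT_DIR}/")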

2. Creating a Character of Your Taste

Now, before using ControlNet, we'll roughly create the style of the character we want. First, go to the Img2Img tab and drag and drop the first image of the prepared PNG sequence into it.
Then create your rough image by adjusting prompts and settings for the face, hairstyle, clothing, background, and so on.

Settings:

  • Base Model: realcartoonPixar_v5
  • VAE: None
  • Positive Prompt: (high resolution), (high quality), (highly detailed), 1girl, blond short hair, ninja spidersuit, slim body, looking at viewer, simple black background
  • Negative Prompt: Easynegative, (worst quality), watermark
  • Sampling Method: DPM++ SDE Karras
  • Sampling Steps: 30
  • Size: 512×768
  • CFG: 7
  • Seed: 2979862768
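
By the way, if you launch WebUI with the `--api` flag, the very same settings can be sent to Automatic1111's `/sdapi/v1/img2img` endpoint from a script. A minimal sketch of the idea: the frame path is a placeholder, the denoising strength is my own guess since it isn't in the list above, and the checkpoint name must match your installed file.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # default local WebUI address

# First frame of the prepared sequence (placeholder path).
with open("png_sequence/frame_0000.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": ("(high resolution), (high quality), (highly detailed), "
               "1girl, blond short hair, ninja spidersuit, slim body, "
               "looking at viewer, simple black background"),
    "negative_prompt": "Easynegative, (worst quality), watermark",
    "sampler_name": "DPM++ SDE Karras",
    "steps": 30,
    "width": 512,
    "height": 768,
    "cfg_scale": 7,
    "seed": 2979862768,
    # Not listed in the settings above; 0.5 is only a starting point to tune.
    "denoising_strength": 0.5,
    # The checkpoint name must match the file you actually have installed.
    "override_settings": {"sd_model_checkpoint": "realcartoonPixar_v5"},
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

with open("rough_character.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```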

Once you have your desired result, move on to the ControlNet tab, and I'll walk you through the necessary settings.

3. ControlNet Settings

There are three main settings we will use in ControlNet.

  • Openpose
  • SoftEdge
  • Lineart

1) Openpose

The role of Openpose in Img2Img is to extract the skeleton from the reference image, and with the dw_openpose_full preprocessor it can even capture detailed facial features and finger bones. To create the smoothest possible video or image sequence, extracting the skeleton accurately is crucial, so be sure to apply this setting!

[Image: Openpose Settings]

2) Softedge

Softedge captures the edges of the objects in the reference image. Activating it gives you clear lines around the main subject. Softedge helps prevent large changes in the subject's outline between consecutive images, though elements with strong lines or shadows may stay locked to the reference no matter how you vary the prompt. Since our goal is to create the smoothest animation possible, activate and configure Softedge.

[Image: Softedge Settings]

3) Lineart

Lineart is similar to Softedge, but it's essential for videos with significant movement, such as dancing. It captures the approximate lines of moving arms and legs so consecutive images don't change drastically. Make sure to enable it!

[Image: Lineart Settings]

Now that all the settings are complete, try the Control Mode options in each unit – Balanced, My prompt is more important, ControlNet is more important – according to your preference until you get the desired result for the first frame.
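
For those scripting against the API, the three units above are passed as `alwayson_scripts` entries in the same img2img payload from the earlier sketch. This is a sketch under assumptions: the module and model names must match what your Preprocessor and Model dropdowns actually show (I'm assuming the standard ControlNet 1.1 models here), and you may need to raise the Multi-ControlNet unit count in WebUI's settings to run three units at once.

```python
# Extends the img2img `payload` from the earlier sketch.
# Module/model names are assumptions; match them to your own dropdowns.
controlnet_units = [
    {
        "module": "dw_openpose_full",           # skeleton + face + fingers
        "model": "control_v11p_sd15_openpose",
        "weight": 1.0,
        "control_mode": "Balanced",             # or "My prompt is more important",
                                                # "ControlNet is more important"
    },
    {
        "module": "softedge_pidinet",           # soft outlines of the subject
        "model": "control_v11p_sd15_softedge",
        "weight": 1.0,
        "control_mode": "Balanced",
    },
    {
        "module": "lineart_realistic",          # rough lines for big movements
        "model": "control_v11p_sd15_lineart",
        "weight": 1.0,
        "control_mode": "Balanced",
    },
]

payload["alwayson_scripts"] = {"controlnet": {"args": controlnet_units}}
```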

4. Generate Images

Now we’re at the final step. Create Input and Output folders in a desired location on your computer, and place all the images extracted from the sequence you prepared earlier into the Input folder.

[Image: Input Folder]

In the Img2Img tab, above the space where the reference image is uploaded, there should be a tab called “Batch.” Click on it and enter the paths for both the Input and Output folders. Then, scroll down to check all the settings one more time.

[Image: Path for Images]
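
The Batch tab runs this loop for you inside WebUI, but for the curious, here is roughly what it amounts to when scripted against the API. This sketch reuses the `payload` built in the earlier snippets, and the folder paths are placeholders.

```python
import base64
from pathlib import Path

import requests

URL = "http://127.0.0.1:7860"
IN_DIR = Path("Input")    # frames extracted from the reference video
OUT_DIR = Path("Output")  # where the stylized frames will be written
OUT_DIR.mkdir(exist_ok=True)

# `payload` is the img2img + ControlNet payload built in the earlier sketches.
for frame_path in sorted(IN_DIR.glob("*.png")):
    with open(frame_path, "rb") as f:
        payload["init_images"] = [base64.b64encode(f.read()).decode("utf-8")]

    r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()

    out_path = OUT_DIR / frame_path.name
    out_path.write_bytes(base64.b64decode(r.json()["images"][0]))
    print(f"{frame_path.name} -> {out_path}")
```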

Shall we now take a brief look at the completed video?

[Video: Generated AI Animation by A2SET – PNG files converted to video]
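
If you want to stitch the generated Output frames back into a video yourself, a short OpenCV sketch like the one below works. The output file name and codec are my own choices, and 24fps matches the extraction step from earlier.

```python
import cv2
from pathlib import Path

OUT_DIR = Path("Output")  # stylized frames from the batch run
frames = sorted(OUT_DIR.glob("*.png"))

# Size the video from the first frame; all frames should be 512x768.
first = cv2.imread(str(frames[0]))
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # a widely supported codec
writer = cv2.VideoWriter("animation.mp4", fourcc, 24, (width, height))

for frame_path in frames:
    writer.write(cv2.imread(str(frame_path)))

writer.release()
print(f"Wrote animation.mp4 from {len(frames)} frames")
```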

It’s truly amazing!! Just by fine-tuning the settings I introduced in the previous post a bit more, you can see a much smoother result. Of course, it’s not yet perfectly smooth, but still!! I think it’s a pretty good outcome.

Next time, I’ll come back with more updated information to help you all create fantastic results too!
