Creating a Short AI Animation with Stable Diffusion

Hello, how are you doing today?
Today, I’m excited to guide you through creating a very short AI animation with Automatic1111, the WebUI for one of the hottest AI tools right now, Stable Diffusion.

AI Animation Banner

What is Video?

First off, let’s clear up a common misconception: video and photography are not the same thing. When we shoot video with a phone or a professional camera, we set something called FPS, which stands for Frames Per Second. It is simply how many frames the camera captures every second.

What is a frame?

In simple terms, a frame is a single photograph. Setting your camera to 30 FPS means you’re capturing 30 individual images every second. For standard video, 30 FPS is typical, while film traditionally uses 24. Frame rates vary elsewhere: Japanese animation often uses fewer unique drawings per second depending on the scene, while a Disney feature may animate much closer to the full frame rate.
The higher the frame rate, the smoother the motion appears. So, what about high-speed cameras?

High-speed cameras can capture anywhere from 1,000 frames per second up to tens of thousands. Since the standard video we watch plays back at 30 FPS, each second of playback shows only 30 of the multitude of frames captured, so one real second of high-speed footage stretches out over many seconds on screen. That is exactly why it looks like slow motion.
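The slow-motion effect comes from this mismatch between the capture rate and the playback rate. A quick sketch of the arithmetic:

```python
def slowdown_factor(capture_fps: float, playback_fps: float = 30.0) -> float:
    """How many times slower the footage appears when played back."""
    return capture_fps / playback_fps

def playback_duration(capture_fps: float, seconds_recorded: float,
                      playback_fps: float = 30.0) -> float:
    """Real-world seconds the clip lasts when played at playback_fps."""
    total_frames = capture_fps * seconds_recorded
    return total_frames / playback_fps

# One second shot at 1,000 FPS, played back as standard 30 FPS video:
print(slowdown_factor(1000))       # about 33x slower
print(playback_duration(1000, 1))  # about 33.3 seconds of playback
```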

Understanding these basics of video will be incredibly helpful as I explain how to create AI-generated videos using Stable Diffusion (SD)!

Isn’t it fascinating already?

I’m assuming that you already have Stable Diffusion installed on your computer. For those who haven’t tried it yet, I’ll link to my previous post below.

1. Setup Preparation

The task at hand is not to AI-fy an existing video but to generate a video entirely with AI, so a few settings provided by Stable Diffusion need to be ready:

  • Main Program: WebUI Stable Diffusion / Automatic1111
  • Extension Program: sd-webui-controlnet-animatediff
  • Extension Program: sd-webui-animatediff-for-ControlNet
  • Dedicated Model for Extension Program: ControlNet model
  • Dedicated Model for Extension Program: animatediff model

If you have these five items ready, you can start testing immediately. Don’t worry if you don’t have them yet:
aside from the installation of WebUI Stable Diffusion itself, I’ll guide you through each of the rest. For installing WebUI Stable Diffusion, please read and follow the instructions via the link below.

–> Go to Post (How to Install SD)

1) Installing sd-webui-controlnet-animatediff

To install, go to the Extensions tab on the main screen of Automatic1111, open Install from URL, and paste the repository URL below.

–> GitHub link, or copy this URL into the Extensions tab’s Install from URL field: https://github.com/DavideAlidosi/sd-webui-controlnet-animatediff.git

Automatic1111-Extensions-Install from URL

2) Installing sd-webui-animatediff-for-ControlNet

Similarly, go to the Extensions tab on the main screen of Automatic1111, open Install from URL, and paste the repository URL below.

–> GitHub link, or copy this URL into the Extensions tab’s Install from URL field: https://github.com/DavideAlidosi/sd-webui-animatediff-for-ControlNet.git

Automatic1111-Extensions-Install from URL
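If you prefer not to use the Install from URL tab, the same two extensions can be cloned by hand into WebUI’s extensions folder. A minimal sketch; the `EXTENSIONS_DIR` path is an assumption, so point it at your own WebUI install:

```python
import subprocess
from pathlib import Path

# Assumed location of your WebUI checkout -- adjust to your own setup.
EXTENSIONS_DIR = Path("stable-diffusion-webui/extensions")

REPOS = [
    "https://github.com/DavideAlidosi/sd-webui-controlnet-animatediff.git",
    "https://github.com/DavideAlidosi/sd-webui-animatediff-for-ControlNet.git",
]

def repo_name(url: str) -> str:
    """Folder name that `git clone` will create for a repo URL."""
    return url.rsplit("/", 1)[-1].removesuffix(".git")

def clone_extensions() -> None:
    """Clone each extension into EXTENSIONS_DIR unless it is already there."""
    for repo in REPOS:
        target = EXTENSIONS_DIR / repo_name(repo)
        if not target.exists():
            subprocess.run(["git", "clone", repo, str(target)], check=True)
```

Call `clone_extensions()` and then restart the UI; the result is the same as installing through the Extensions tab.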

Once both extensions are installed, click the Apply and restart UI button and follow the steps below.

3) Applying the ControlNet model

Open the link below, download the “control_v11f1e_sd15_tile” model, and place it in the Extensions / sd-webui-controlnet-animatediff / Models folder.

–> Go to download link

4) Applying the AnimateDiff model

Open the link below, download the “mm_sd_v15_v2.ckpt” motion module, and place it in the Extensions / sd-webui-animatediff-for-ControlNet / Model folder.

–> Go to download link

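Before restarting the UI, it’s worth double-checking the paths, because a model file in the wrong folder fails silently: the dropdowns simply won’t show it. A small sketch, assuming the folder layout described above (the WebUI root path and the `.pth` filename for the tile model are assumptions):

```python
from pathlib import Path

# Assumed root of your WebUI install -- adjust to your own setup.
WEBUI = Path("stable-diffusion-webui")

# Which model file belongs in which extension folder, per the steps above.
# (The tile model is usually distributed as a .pth file.)
EXPECTED = {
    "control_v11f1e_sd15_tile.pth":
        WEBUI / "extensions" / "sd-webui-controlnet-animatediff" / "Models",
    "mm_sd_v15_v2.ckpt":
        WEBUI / "extensions" / "sd-webui-animatediff-for-ControlNet" / "Model",
}

def missing_models() -> list:
    """Return the model files that are not where their extension expects them."""
    return [name for name, folder in EXPECTED.items()
            if not (folder / name).exists()]
```

An empty list from `missing_models()` means both files are in place and will be picked up after a UI restart.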

2. Composing Prompts and Creating Images

Now that all the settings and preparation are complete, let’s get started. In the Txt2Img tab, I will set the icbinpICantBelieveIts_seco.safetensors model as my base model and describe the image I want to create in the Positive Prompt.

  • Base Model : icbinpICantBelieveIts_seco.safetensors
  • Positive Prompt : 8k, high-resolution, upper-body, photograph of a girl wearing a blue dress on a beach, suitable for a popular Instagram post.

The composition of the prompt differs from person to person, but I like to combine descriptive keywords with a simple one-sentence scenario. Set the batch size to 1, the image size to 512×512, and leave the seed value at -1. Let’s generate and see what we get.
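The same generation can also be driven programmatically: Automatic1111 exposes a web API when started with the --api flag, and the settings above map directly onto the txt2img request body. A sketch of that payload; the local address is the default assumption:

```python
# The settings from this section, expressed as an Automatic1111 txt2img payload.
# Field names follow the A1111 web API; start WebUI with --api to use it.
payload = {
    "prompt": ("8k, high-resolution, upper-body, photograph of a girl "
               "wearing a blue dress on a beach, suitable for a popular "
               "Instagram post"),
    "width": 512,
    "height": 512,
    "batch_size": 1,
    "seed": -1,  # -1 means a random seed, same as in the UI
}

# To actually send it to a locally running WebUI (default address assumed):
# import json, urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:7860/sdapi/v1/txt2img",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     images = json.load(resp)["images"]  # list of base64-encoded PNGs
```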

Generated Image

3. Animatediff Settings and GIF Creation

Next, expand the AnimateDiff panel on the bottom left and adjust a few settings. If you placed the motion module you downloaded earlier in the correct path, the AnimateDiff Motion module dropdown should automatically show the mm_sd_v15_v2 model. Tick the Enable checkbox right below it, set the Number of frames to 10, and Frames Per Second to 20.

Animatediff Settings

As mentioned before, an FPS value of 20 means the clip plays back 20 images per second, so with 10 total frames we will be creating a video that lasts half a second, made up of 10 images.
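Given the settings from the AnimateDiff panel (10 frames at 20 FPS), the clip length works out by simple division, which makes it easy to plan a clip before committing GPU time:

```python
def clip_length_seconds(total_frames: int, fps: int) -> float:
    """Clip length is simply the frame count divided by the playback rate."""
    return total_frames / fps

# The AnimateDiff settings used above: 10 frames played at 20 FPS.
print(clip_length_seconds(10, 20))  # 0.5 -> half a second of animation

# For a 2-second clip at the same FPS you would need 40 frames.
print(clip_length_seconds(40, 20))  # 2.0
```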

4. ControlNet Settings

Now, drag the image we generated above into ControlNet’s Single Image window on the bottom left. ControlNet is an extension that guides the creation of new images based on the reference image set here. Tick Enable here as well and select the Tile/Blur control type; the settings below it will be filled in automatically, and the ControlNet model you downloaded earlier should appear on its own.

ControlNet Settings

Now that all the preparations are done, shall we try pressing the image generation button?

Creating a total of 10 images will take some time, but once they are done, they will be saved to your usual image output folder. The GIF assembled from them is saved alongside, so you can check the animation right away.

AI Animation Output GIF

Wow..! That’s really incredible!
It may not be a result that surpasses imagination, but what deserves attention here is that the AI generated it entirely on its own.
Many artists are now pioneering a new kind of content market by combining AI with music videos or existing polished footage. They all have one thing in common: they work from videos that already exist or are already well finished.

However, the method I showed you today feels like a genuine act of creation. Thanks to the pace of AI development, every day is enjoyable. The output isn’t perfect, but with a few more steps, it’s said that quite a smooth animation can be produced.

For today, we’ll leave it at that, and next time I’ll guide you through a more detailed method. You’ve worked hard today, and I’ll come back with more new information~!
