Hello everyone~! How is your day going?
Today, I want to talk about using Automatic1111’s ControlNet for dynamic posing.
ControlNet
ControlNet is a neural network structure that allows controlling pre-trained large diffusion models using additional conditions.
Some of the conditions you can use include edge maps, segmentation maps, key points, and more.
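To give you a feel for what "additional conditions" means in practice, here is a minimal sketch using Hugging Face's diffusers library (separate from the WebUI workflow we'll use below): a pose image is passed in alongside the prompt and steers the layout of the result. The model IDs are the public OpenPose ControlNet and a standard SD 1.5 checkpoint, and the file names are just placeholders.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the OpenPose ControlNet and attach it to a base SD 1.5 pipeline
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The pose image is the "additional condition" that guides the composition
pose_image = load_image("sitting_pose.png")  # placeholder file name
result = pipe(
    "1girl, sitting, reading a comic book",
    image=pose_image,
    num_inference_steps=20,
).images[0]
result.save("output.png")
```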

1. How to use ControlNet Pose
I imagine that both new and existing users creating artwork with Automatic1111 often find themselves puzzling over the camera angle or the pose. So today I'd like to briefly introduce the most fundamental feature in ControlNet: Pose. As a bonus, I'll also share pose data that will help you get the hang of posing quickly, so make sure to read to the end and download the files to practice!
1) Writing Prompts
First, create a character and mood close to your desired outcome using an appropriate Base Model and prompts. Those of you who have been following my previous posts should be able to do this quickly.
But I understand there might be beginners too, so let me explain briefly.
After launching the Stable Diffusion WebUI (Automatic1111), select the base model at the top left, write your desired Positive and Negative Prompts, and then press the Generate button!
–> Go to Post (How to install Stable Diffusion)
For those who find it challenging to create on their own, I will show you the prompts and settings I used. You can follow along with the image and settings below!

Settings
- Base Model : toonyou_beta6.safetensors
- Vae : vae-ft-mse-840000-ema-pruned.safetensors
- Positive Prompts : (masterpiece, best quality), 1girl, solo, black wavy hair, oversize hoodie, black jeans, white socks, sitting, room filled in toys, reading a comicbook, dappled sunlight, (low angle)
- Negative Prompt : (worst quality, low quality, letterboxed)
- Sampling : DPM++ SDE Karras
- Steps : 20
- Hires.fix / Upscaler : R-ESRGAN 4x+ Anime6B
- Hires.fix / Upscale by : 2
- Hires.fix / Denoising : 0.7
- Size : 512 X 512
- Seed : -1
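By the way, if you prefer scripting to clicking, the same settings can be sent to the WebUI's built-in txt2img API. Treat the snippet below as a rough sketch: it assumes you launched the WebUI with the --api flag on the default port, and the field names reflect the Automatic1111 API as I know it, so double-check them against your own install.

```python
import base64
import requests

# Assumes the WebUI was started with the --api flag on the default port
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "(masterpiece, best quality), 1girl, solo, black wavy hair, "
              "oversize hoodie, black jeans, white socks, sitting, "
              "room filled in toys, reading a comicbook, dappled sunlight, (low angle)",
    "negative_prompt": "(worst quality, low quality, letterboxed)",
    "sampler_name": "DPM++ SDE Karras",
    "steps": 20,
    "width": 512,
    "height": 512,
    "seed": -1,
    "enable_hr": True,                      # Hires.fix
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
    "hr_scale": 2,
    "denoising_strength": 0.7,
    "override_settings": {
        "sd_model_checkpoint": "toonyou_beta6.safetensors",
        "sd_vae": "vae-ft-mse-840000-ema-pruned.safetensors",
    },
}

result = requests.post(URL, json=payload).json()
with open("sitting.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))  # images come back base64-encoded
```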
2) Settings for ControlNet
As shown in the image, I tried creating a character in the style I wanted using prompts. Of course, depending on the arrangement of the prompts, the emphasis on certain prompts, and the Seed value, various outputs can be generated. In my case, I’m going to maintain the settings as above and now try editing the pose. Aren’t you already curious to see how it turns out?
Now, let's unfold the ControlNet tab mentioned in a previous post! If ControlNet isn't installed yet, simply copy the link below, paste it into the Install from URL field under the Extensions tab of the Automatic1111 WebUI, and press Install. Once the installation is complete, don't forget to restart the UI.
Copy this URL : https://github.com/Mikubill/sd-webui-controlnet.git
The Pose setting in ControlNet is quite straightforward. First, download the JPG file of the 3D-rendered sitting pose I’m sharing with you. Then, proceed with the settings as shown in the image below.

ControlNet Detailed Settings
When you unfold the ControlNet tab, you'll see a space to drag or load an image, along with checkboxes for Enable, Low VRAM, Pixel Perfect, and Allow Preview. First, drag the downloaded sitting pose image into the image window, and then tick the Enable checkbox just below it.
Next, we’ll select a Type. Among the many options, we’ll check the OpenPose box. For the Preprocessor, choose openpose_full, and for the Model, select control_v11p_sd15_openpose.
If you don’t have the Model, it can be downloaded from the link provided below.
–> Download Link(ControlNet_model)
Now, leave the rest of the settings as they are and look at the Control Mode options. Balanced reflects both the prompts you've written and the image in ControlNet roughly equally; My prompt is more important follows the values in your prompt more than the image loaded into ControlNet; and ControlNet is more important adheres more closely to the pose you've dragged into the image window.
Since we are practicing poses, we select ControlNet is more important, and then all we have to do is hit the Generate button again. Aren’t you excited to see what the outcome will be?
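For those following along through the API sketch above rather than the UI, the same ControlNet settings can be added as an extra unit under alwayson_scripts once the sd-webui-controlnet extension is installed. Again, this is only a sketch: the field names follow the extension's API documentation and have changed between releases, so verify them against the version you have, and note that the prompt here is shortened for brevity.

```python
import base64
import requests

# Hypothetical local copy of the downloaded sitting-pose image
with open("sitting_pose.jpg", "rb") as f:
    pose_b64 = base64.b64encode(f.read()).decode("utf-8")

# Same txt2img payload as in the earlier sketch (trimmed to the essentials),
# plus one ControlNet unit under alwayson_scripts
payload = {
    "prompt": "(masterpiece, best quality), 1girl, solo, sitting, reading a comicbook",
    "negative_prompt": "(worst quality, low quality, letterboxed)",
    "sampler_name": "DPM++ SDE Karras",
    "steps": 20,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "image": pose_b64,
                    "module": "openpose_full",              # Preprocessor
                    "model": "control_v11p_sd15_openpose",  # ControlNet model
                    "control_mode": "ControlNet is more important",
                }
            ]
        }
    },
}

result = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).json()
with open("sitting_controlnet.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```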

I feel this every time, but it really makes me so happy, haha.
While it doesn't fully reproduce the three-dimensional feel of the reference I loaded into ControlNet, you can still see that it has captured a very similar pose and vibe.
So, how about we try again with a slightly more dynamic staging, changing the prompts and ControlNet image this time?
2. Maximizing the Use of ControlNet OpenPose
As you may have noticed, OpenPose in ControlNet works much better when the reference image is clear. This time, I'm going to try another run using a running pose in which the arms and legs are clearly separated from the background and the body.
Feel free to download the same image and follow along with me! –> Download Running Pose Image
Settings
- Base Model : toonyou_beta6.safetensors
- Vae : vae-ft-mse-840000-ema-pruned.safetensors
- Positive Prompts : (masterpiece, best quality), 1girl, solo, black wavy hair, oversize hoodie, black jeans, white socks, running shoes, running, forrest, dappled sunlight
- Negative Prompt : (worst quality, low quality, letterboxed)
- Sampling : DPM++ SDE Karras
- Steps : 20
- Hires.fix / Upscaler : R-ESRGAN 4x+ Anime6B
- Hires.fix / Upscale by : 2
- Hires.fix / Denoising : 0.7
- Size : 512 X 512
- Seed : -1

Keeping the same settings as before, I switched 'sitting' to 'running' and adjusted a few prompts to create a running pose. After uploading one of the downloaded running pose images to ControlNet, you can proceed with the same ControlNet settings as before.
Now, shall we try generating once again?

Although there are a few awkward spots, like the shape of the fingers, it's still incredible, isn't it?
Can you see how much better it reflects the pose compared to the earlier example? And you're not limited to rendered references like these; I also recommend uploading real-life photos to ControlNet for more detailed results.
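If you want to check how well a real photo will read as a pose before generating, you can also pre-compute the OpenPose skeleton yourself. Here's a small sketch using the controlnet_aux package (a standalone collection of ControlNet annotators); it sits outside the WebUI workflow, and the file names are placeholders.

```python
from PIL import Image
from controlnet_aux import OpenposeDetector

# Download the OpenPose annotator weights and build the detector
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# Extract a clean skeleton image from a real photo (placeholder file names)
photo = Image.open("real_running_photo.jpg")
skeleton = detector(photo)
skeleton.save("running_pose_skeleton.png")

# The saved skeleton can then be loaded into ControlNet with the preprocessor
# set to "none", since the pose has already been extracted.
```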
Today, we briefly explored ControlNet's OpenPose feature. How did you find it?
Next time, I’ll introduce more detailed and user-friendly features of ControlNet, so stay tuned!