Hello everyone, today I’m going to briefly introduce LCM LoRA to you.
LDM + LoRA vs. LCM LoRA
If you’ve used Stable Diffusion, chances are you’ve used LoRA at least once. LoRA (Low-Rank Adaptation) is a lightweight fine-tuning technique for latent diffusion models (LDMs) such as Stable Diffusion, and it has been instrumental in generating realistic and varied images through Txt2Img or Img2Img.
However, the LDMs underneath the LoRA files you’ve been using require numerous sampling steps and consume a lot of GPU memory, making the process time-consuming. For more information on LoRA, you can check the link below.
→ Go to Post (What is LoRA and How to Use?)

This is where LCM-LoRA becomes crucial. LCM-LoRA is a universal stable diffusion acceleration module that can increase the speed of LDMs by up to 10 times, while maintaining or even improving image quality. Today, I want to introduce and guide you on how to use this innovative AI technology called LCM LoRA.
1. What is LCM LoRA?
LCM-LoRA combines a Latent Consistency Model (LCM) with LoRA (Low-Rank Adaptation) adapters. It’s a technique that distills LDMs so they can sample in far fewer steps, without sacrificing image quality.
The core idea of LCM-LoRA is to train a small set of adapters, known as LoRA layers, instead of the full model. These LoRA layers are low-rank weight updates attached to the attention and linear layers of the LDM’s U-Net, and they are trained so that the adapted model mimics a fully distilled latent consistency model.
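To make the adapter idea concrete, here is a toy sketch in plain PyTorch. The shapes and numbers are made up for illustration; real LoRA layers live inside the U-Net, but the arithmetic is the same: a frozen weight matrix plus a trainable low-rank update.

```python
# Toy illustration of a LoRA layer: frozen weight + trainable low-rank update.
import torch

d, r = 768, 4                  # layer width, adapter rank (r << d)
W = torch.randn(d, d)          # frozen base weight, never trained
A = torch.randn(r, d) * 0.01   # trainable down-projection
B = torch.zeros(d, r)          # trainable up-projection (starts at zero)

x = torch.randn(1, d)          # one input activation
y = x @ (W + B @ A).T          # adapted output: base plus low-rank correction
```

Because only A and B are trained, the adapter is tiny compared to the full model, which is why it can be shared and swapped so easily.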
The resulting model, an LCM, can generate images in far fewer diffusion steps and with less memory consumption. But more remarkably, the trained LCM-LoRA adapter can be plugged directly into any fine-tuned version of the base LDM without any additional training.
This means LCM-LoRA can be used as a universal stable-diffusion acceleration module for any image generation task based on LDMs.
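This plug-and-play property is easy to see in code. Below is a minimal sketch using Hugging Face’s diffusers library; the checkpoint and adapter IDs are examples, and a CUDA GPU is assumed. The web UI workflow we follow in the next section doesn’t require any of this.

```python
# Plugging the LCM-LoRA adapter into a fine-tuned SD1.5 pipeline, no retraining.
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8",        # example: any fine-tuned SD1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # LCM sampler
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")      # the adapter
```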
2. How to use LCM LoRA?
Though the theory might sound complex, let’s try it out and compare the differences. What I’m about to show is just an example, so treat it as practice. If you need detailed instructions on using LCM-LoRA for image generation, you can find guidance in the GitHub repository.
Firstly, we need to download the LCM-LoRA files. Please download them using the link below. We will be testing two versions today: SD1.5 and SDXL.
After downloading, move the files into the web UI’s models/Lora folder and launch the Stable Diffusion web UI.
(Note that both files have identical names after downloading, so you’ll need to rename them to SD15 and SDXL yourself!)
There’s no separate installation step, which might feel anticlimactic, but wait until you see the results from the test – you’ll be amazed.
1) Generate Image
Now, in the running web UI, try creating the image you want by writing a prompt!
If it’s difficult to do on your own, feel free to follow the settings I’ve provided (a rough diffusers equivalent is sketched after the list).
Settings
- Base Model : dreamshaper_8.safetensors
- VAE : Automatic
- Positive Prompt : (high resolution), (high quality), (highly detailed), 1girl, blond short hair, slim body, jacket, looking at viewer, simple black background
- Negative Prompt : EasyNegative, (worst quality), watermark, nude
- Sampling Method : Euler a
- Sampling Steps : 30
- Size : 512×512
- CFG : 7
- Seed : -1
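For reference, here is that rough diffusers equivalent. It’s a sketch, not a one-to-one reproduction: the web UI’s attention-weighting parentheses and the EasyNegative embedding have no direct counterpart here, and the checkpoint ID is an example.

```python
# Baseline generation, mirroring the web UI settings above.
import torch
from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")
# "Euler a" in the web UI corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="high resolution, high quality, highly detailed, 1girl, "
           "blond short hair, slim body, jacket, looking at viewer, "
           "simple black background",
    negative_prompt="worst quality, watermark, nude",
    num_inference_steps=30,   # Sampling Steps
    guidance_scale=7.0,       # CFG
    width=512, height=512,
).images[0]
image.save("baseline.png")
```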

2) Apply LCM LoRA
If your image turned out like the one above, now open the LoRA tab and add the SD15 LCM-LoRA you downloaded earlier to the positive prompt. The most crucial part here is adjusting the Sampling Steps and CFG Scale values: unlike a traditional LoRA, LCM needs these settings lowered significantly (see the sketch after this list).
- Sampling Steps: 4 ~ 8
- CFG Scale: 1 ~ 2
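In the diffusers sketch, the same change looks like this: swap in the LCM scheduler, load the adapter, and drop the step count and guidance scale (again, the model IDs are examples).

```python
# The same generation with LCM-LoRA plugged in and the lowered settings above.
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-8", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    prompt="high resolution, high quality, highly detailed, 1girl, "
           "blond short hair, slim body, jacket, looking at viewer, "
           "simple black background",
    num_inference_steps=6,    # within the 4 ~ 8 range
    guidance_scale=1.5,       # within the 1 ~ 2 range
    width=512, height=512,
).images[0]
image.save("lcm.png")
```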
I will also change the Sampling Steps and CFG values accordingly and generate the image again.

Ta-da! The point to notice here is the significant difference in time.
- Normal Generating time: 41.2 sec
- LCM LoRA Generating time: 9 sec
For those who have always had to wait a long time for images to generate, this could be a tremendous relief. A more-than-four-fold speedup (41.2 seconds down to 9) is remarkable, and even more impressive is that the quality is maintained.
To clearly demonstrate the difference in time, I’ll show you a comparison shot.
This time, keeping all settings identical, I applied the 4X Ultra Upscaler and generated the image once again.

Wow, there’s an incredible 2-minute difference in processing time! This is a game-changer, especially for those with less powerful GPUs who struggle with intensive tasks. Now, I’ll show you a side-by-side comparison using SDXL. Remember, to use SDXL you’ll need to switch the base model and VAE settings; the sketch below shows the diffusers equivalent.
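For completeness, the SDXL setup in diffusers only swaps the base model and the adapter (a sketch; the model IDs are examples, and SDXL needs noticeably more VRAM).

```python
# SDXL with its matching LCM-LoRA adapter.
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

image = pipe(
    prompt="high resolution, highly detailed, 1girl, blond short hair, jacket",
    num_inference_steps=6, guidance_scale=1.5,
    width=1024, height=1024,   # SDXL's native resolution
).images[0]
```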

While there may be some differences in appearance, the significant reduction in processing time with LCM LoRA is truly impressive.
I encourage you all to freely create more outcomes using LCM LoRA.
We’re always cheering for you. Stay tuned for more new and exciting content next time!