What are LoRA models and how to use them

Hello everyone! Today we will explain LoRA, one of the various kinds of models used with Stable Diffusion.

1. What is LoRA?

The word LoRA itself is quite unfamiliar, isn’t it?

LoRA stands for Low-Rank Adaptation, a technique that uses low-rank matrices to fine-tune diffusion models quickly. To put it in simple terms, LoRA training makes it easier to teach Stable Diffusion new concepts, such as a particular character or a specific style. These trained models can then be exported and used by others in their own generations.

Stable Diffusion models have been gaining popularity in the field of machine learning for their ability to generate high-quality images from text. However, one major drawback of these models is their large file size, which makes it difficult for users to maintain a collection on their personal computers. This is where LoRA comes in: a training technique that fine-tunes Stable Diffusion models while keeping file sizes manageable.

–> Go to Post about Stable Diffusion

LoRA models are small Stable Diffusion models that apply smaller changes on top of standard checkpoint models, resulting in file sizes of roughly 2-500 MB, much smaller than checkpoint files. LoRA offers a good trade-off between file size and training power, making it an attractive solution for users who keep an extensive collection of models.
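
If you're curious about what "low-rank" actually means here, below is a minimal, purely conceptual Python (PyTorch) sketch, not actual Stable Diffusion code, with a made-up layer size and rank. It shows why a LoRA file stays so small: instead of storing a whole new weight matrix, it only stores two thin matrices whose product is added on top of the frozen base weight, scaled by the strength you choose.

import torch

d = 768   # width of one attention layer (illustrative)
r = 8     # LoRA rank, much smaller than d

W = torch.randn(d, d)            # frozen base-model weight, stays in the checkpoint
A = torch.randn(r, d) * 0.01     # trained LoRA "down" matrix
B = torch.zeros(d, r)            # trained LoRA "up" matrix
alpha = 1.0                      # strength, the number you later set in <lora:name:1>

W_adapted = W + alpha * (B @ A)  # applied at load time; W itself is never overwritten

full_params = W.numel()              # d*d values in the base weight
lora_params = A.numel() + B.numel()  # only 2*d*r values stored in the LoRA file
print(f"base: {full_params:,} values, LoRA: {lora_params:,} values "
      f"({lora_params / full_params:.1%} of the size)")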

Thumbnails from Civitai

2. LoRA Types

LoRA is like adding extra tools to our basic Stable Diffusion toolbox. With the right LoRA data, you can do lots of cool things:

  • Realistic LoRA
  • Style LoRA
  • Character LoRA
  • Clothing LoRA
  • Object LoRA

These categories help you get the result you want. Let’s dive into some examples of each type, okay?

Realistic LoRA

Here’s an example of a LoRA that has been trained on real people’s faces and other real-world photos. It’s called “Realistic” because it’s trained to produce images that look like actual people and animals. So, many folks find it super helpful for making virtual characters based on their own faces, or even entirely new ones.

Style LoRA

Style LoRA shares many similarities with character LoRA, but instead of training on a specific character or object, it focuses on an artistic style. This type of model is usually trained on art by a specific artist, giving you access to their signature style in your own work. Style LoRA can be used for anything from stylizing reference images to creating original artwork in that same style.

Character LoRA

A model trained on a specific character, such as a cartoon or video game character. Character LoRA is able to accurately recreate the look and feel of a character, as well as any key features associated with them. This is the most common type of LoRA, as generating characters without this training data is often tricky and inconsistent.

Thumbnails from Civitai (Shanks / Doflamingo / Jinbe)

Clothing LoRA

Another useful model is clothing LoRA. As you’d expect, this type of LoRA model is designed to change the clothing and accessories on a person. With it, you can quickly and easily give any character new clothes, be they modern or historical in style.

Thumbnails from Civitai (Korean Traditional Warrior Dress / SpaceSuit / Latex Under Clothes)

Object LoRA

Last but not least, we have object LoRAs. This is a broad category of LoRA models used to generate objects such as furniture, plants, or even vehicles. Of course, the types of items you can create depend on the specific model you’re using and the prompt you provide.

Thumbnails from Civitai (Arthemy Objects)

So far, I’ve shown you different types of LoRA data and how they can be super useful for AI image editing. Cool, right? With this data, anyone can create the images they want and practice different techniques.

Plus, if you have images you’d like to train with, you can use them to create your own LoRA. I’ll explain how to do that in another post.

Now, let’s dive into how to use LoRA. Stay tuned and follow along!

3. How to use LoRA

First, to use LoRA, you’ll need to install Web UI Stable Diffusion. Once you’ve got the program installed and follow a few steps, you can start using LoRA right away. Just follow the steps below!

Step 1 : Install Web UI Stable Diffusion

–> Go to Post “How to install Web UI Stable Diffusion”

Step 2 : Install the Extension
  • First, launch the Web UI Stable Diffusion / Automatic1111.
  • Open the “Extensions” tab, and click on “Install from URL” from the available options.
  • Paste the following link into the “URL for extension’s git repository” input field, and then press the “Install” button: https://github.com/kohya-ss/sd-webui-additional-networks.git
  • Switch to the “Installed” tab, and click on the “Apply and restart UI” button. Now, wait for the Automatic1111 web UI to restart.

Then you need to place your actual LoRA models in the correct folder as well. To do that, grab the downloaded LoRA file and put it in your “stable-diffusion-webui/models/Lora” folder.
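
If you’d rather do this step with a small script, here is a minimal Python sketch. It assumes the Web UI lives at ~/stable-diffusion-webui and that the downloaded file sits in your Downloads folder; the LoRA file name is just an example, so adjust both paths to your setup.

from pathlib import Path
import shutil

downloaded = Path.home() / "Downloads" / "opal_diamond-noise.safetensors"  # example file name
lora_dir = Path.home() / "stable-diffusion-webui" / "models" / "Lora"

lora_dir.mkdir(parents=True, exist_ok=True)           # create the folder if it doesn't exist yet
shutil.copy2(downloaded, lora_dir / downloaded.name)  # copy the LoRA into the Web UI's Lora folder
print(f"Copied {downloaded.name} into {lora_dir}")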

Folder for LoRA Models
Step 3 : Download the LoRA data you’d like to practice with or rework

Once you’re on Civitai (shown below), you can download a variety of model data. Simply choose a model tagged as LoRA, download it, and place it in the exact path mentioned above.

Civitai Homepage

Step 4 : Test processing in Stable Diffusion using the extension.

All preparations are now complete. We’ve downloaded the LoRA model and finished setting up the Web UI Stable Diffusion extension. Shall we proceed with the tasks at hand?

First, I’ll show you the base model, LoRA data, and the various configurations I’ll be testing with (a rough Python equivalent of these settings is sketched after the list).

  • LoRA Model : [LoRA] Opal / 貴蛋白石 Concept (With dropout & noise version) –> Download
Without LoRA
  • Positive Prompt : 1girl, solo, upper body
  • Negative Prompt : (worst quality, low quality, normal quality:1.4), (inaccurate limb:1.2), white background, nude, simple background
  • Sampling Method : DPM++ 2S a Karras
  • Sampling Steps : 20
  • Upscaler : 4x-UltraSharp / Denoising Strength : 0.2
  • Size : 512 x 512
  • Seed : -1
With LoRA
  • Positive Prompt : 1girl, solo, upper body, <lora:opal_diamond-noise:1>
  • Negative Prompt : (worst quality, low quality, normal quality:1.4), (inaccurate limb:1.2), white background, nude, simple background
  • Sampling Method : DPM++ 2S a Karras
  • Sampling Steps : 20
  • Upscaler : 4x-UltraSharp / Denoising Strength : 0.2
  • Size : 512 x 512
  • Seed : -1
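
For reference, if you prefer working in Python instead of the web UI, here is a rough, hedged sketch of the “With LoRA” settings using the Hugging Face diffusers library. The checkpoint and LoRA file names are assumptions, the web UI’s (word:1.4) attention weighting and the 4x-UltraSharp upscale step are not reproduced here, and DPM++ 2S a Karras has no exact diffusers counterpart, so the closest single-step DPM-Solver scheduler with Karras sigmas is used instead.

import torch
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

# Load the base checkpoint (GhostMix here) from a local .safetensors file -- path is an assumption
pipe = StableDiffusionPipeline.from_single_file(
    "ghostmix.safetensors", torch_dtype=torch.float16
).to("cuda")

# Closest stand-in for the web UI's "DPM++ 2S a Karras" sampler
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Load the LoRA weights (file name assumed from the <lora:opal_diamond-noise:1> tag)
pipe.load_lora_weights("models/Lora/opal_diamond-noise.safetensors")

image = pipe(
    prompt="1girl, solo, upper body",
    negative_prompt="worst quality, low quality, normal quality, inaccurate limb, "
                    "white background, nude, simple background",
    num_inference_steps=20,
    width=512,
    height=512,
).images[0]
image.save("with_lora.png")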

I pressed the “Generate” button with the settings above, leaving all other configurations untouched.

For reference, all these models can be freely downloaded from the Civitai site. I will demonstrate the clear differences in outcomes before and after using LoRA.

It’s incredible! Do you see the stark difference? Without LoRA applied, the image was based solely on the GhostMix model. However, once the prompt included the LoRA, its unique style, with the opal-diamond-like wing decorations, is perfectly represented.

Those using LoRA for the first time will likely find joy in such a simple transformation. If you want to incorporate LoRA while retaining some of the original feel of the Base model, you can adjust its intensity by changing the numbers as shown below.

<lora:opal_diamond-noise:1> = 100% of LoRA

<lora:opal_diamond-noise:0.9>, <lora:opal_diamond-noise:0.5>, <lora:opal_diamond-noise:0.1> = Less than 100%

Lowering these values reflects less of the LoRA’s details. So if you want a proper mix of the base model and the LoRA, or a blend between different LoRAs, adjust these values and test them out.
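
To make the numbers concrete, here is a small conceptual Python sketch (sizes and matrices are illustrative, not read from real model files) of what those weights do: each LoRA’s low-rank delta is scaled by its weight before being added to the base weight, which is why 0.5 mixes in roughly half of a LoRA’s effect and why two LoRAs can be blended at once.

import torch

d, r = 768, 8
W = torch.randn(d, d)                                # base-model weight (e.g. from GhostMix)
delta_opal = torch.randn(d, r) @ torch.randn(r, d)   # low-rank delta from the Opal LoRA
delta_other = torch.randn(d, r) @ torch.randn(r, d)  # delta from some hypothetical second LoRA

w_opal, w_other = 0.5, 0.3   # e.g. <lora:opal_diamond-noise:0.5> plus a second LoRA at 0.3
W_blended = W + w_opal * delta_opal + w_other * delta_other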

Though the installation process might take some time, once you get through it, using LoRA is actually quite simple, isn’t it?

We will continue to share valuable information so that many people can produce quality results through AI services!
