Hello everyone, it’s been a while since I last checked in. I’ve been incredibly busy testing how AI and 3D can be practically applied in the field.
In just a short time, numerous AI-related platforms have emerged, and many more features have been added. Every moment is thrilling and enjoyable.
Today’s information may be more useful for specialists than for the general public, specifically those who make a living from digital work.
It’s about making a short cartoon animation in Cinema4D (C4D) with Octane Render and Midjourney, an AI image-generation platform. However, I will explain everything in a very simple way so that anyone can give it a try.
What is 3D Software?
I’ll explain what 3D software is in a simple and friendly manner. Like me, you’ve probably watched movies, dramas, or cartoons and wondered how those fantastic visuals are created.
In simple terms, think of 3D software as tools that can create everything from the main animation and background elements to modeling objects and even adding impressive visual effects. These tasks often involve so many different features that it requires experts from various fields to work together to produce incredible results. Isn’t that fascinating?
There are also spaces where these experts from different fields gather to showcase their portfolios.
Through the link below, you can view the works of amazing artists from around the world, so please check it out.
What is the process for making 3D Animation?
To create even a short animation, there are three essential stages you must go through:
- Pre-production
- Production
- Post-production
It might look simple, but a good outcome requires the collaboration of many people at each stage.
(Of course, there are those incredibly skilled individuals who can do it all alone.)
Pre-Production Stage
Think of this as the essential preliminary work needed before starting the actual video production. For example, this could involve the writers crafting the story, creating mood boards to set the visual tone and manner, making storyboards to visualize the story, or preparing for filming. It’s the stage just before the actual production process begins.
Production Stage
This is literally the production phase. You can think of it as the time when something is created from nothing. It covers all the steps before the final visual finishing begins, including video shooting, 3D modeling, rigging, weighting, and animation. All of these are collectively referred to as the Production stage.
Post-Production Stage
This is the final stage, where the visuals of everything worked on earlier are polished to top quality. Compiling all the scenes created during the Production stage, adding effects, visual finishing, and color grading through the D.I (Digital Intermediate) process are all part of this last phase. This is how the dramas, movies, and animations we see on OTT platforms are produced.
The content above has been simplified to make it easier for non-experts to understand. Please understand that delving deeper would require explaining many more details. Anyway! The reason I’ve explained these stages is to highlight that using AI tools can make these processes more efficient.
Those who have extensively used AI tools might recognize that there are still many challenges in directly applying them in professional settings. However, on the flip side, it’s true that with the help of AI tools, a task that used to take 10 hours can potentially be reduced to just 1 hour.
Based on today’s discussion, I encourage you to explore and undertake various challenges.
1. Requirements
- 3D Software / Renderer
- 3D Model File (Animation)
- Midjourney
- After Effects
1) 3D Software
There are many types of 3D software available, so it’s difficult to say which one you must use. However, if you want to follow the trends a bit, I recommend choosing between Maya, Blender, or Cinema4D. I plan to use Cinema4D for my project, and I’ll be using Octane Render as the renderer.
2) 3D Model File (Animation)
Please download the 3D model file through the link below.
–> Download link
3) Midjourney
For the AI tool, I’ll be using Midjourney today. If you’re not familiar with Midjourney, please read about it through the link below.
–> What is Midjourney?
4) After Effects
Lastly, I will use Adobe’s motion graphics and compositing tool, After Effects, for the compositing. If you are an Adobe subscriber, please use it; if not, you can use DaVinci Resolve instead.
2. Create Animation with 3D & AI
1) Import 3D Animation File
Now, let’s open Cinema4D. Today, as shown in the picture above, we’ll only be using a few features, so don’t worry and just follow along!
We’re going to create a very simple animation using Cinema4D and Midjourney.
First, download the 3D model file I’ve provided and import it into Cinema4D.
As shown in the picture above, the 3D animation file should have loaded. It’s a model file with a simple animation already in place.
If you wish, you can add additional objects that complement it. If it’s your first time using this, you can just leave the animation file as is.
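If you’re comfortable with a little scripting, the same import can also be done from Cinema4D’s Script Manager with the Python c4d API. The snippet below is just a reference sketch under that assumption; the file path is a placeholder for wherever you saved the downloaded model.

```python
# Minimal sketch (Cinema4D Script Manager): merge the downloaded model file
# into the currently open scene -- the scripted equivalent of File > Merge.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    path = "/path/to/walking_character.fbx"  # placeholder: your downloaded model file

    # Pull the file's objects and materials into the open document.
    ok = c4d.documents.MergeDocument(
        doc, path,
        c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS)
    if not ok:
        raise RuntimeError("Could not merge: " + path)

    c4d.EventAdd()  # refresh the Object Manager and viewport

if __name__ == "__main__":
    main()
```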
2) Octane Cartoon Material
If you are using Octane Render in Cinema4D, try creating a cartoon-style color material.
Just like in the image shown above, create the material and choose your desired colors.
For example, I created colors for human skin, clothing, the floor, sunglasses, and black hair. Make sure to customize the colors to fit the different parts of your model as needed.
Typically, you would unwrap the model’s UV map and do the color work there. To keep the explanation simple for now, though, you can just select a polygon and drag and drop the desired color material onto it.
To select all connected polygons of an object, select one polygon and press U, then W (Select Connected).
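For reference, here is a rough scripted version of the same idea using the c4d Python API. It uses a standard Cinema4D material as a stand-in, since the actual Octane material is created through the Octane plugin and its IDs aren’t covered here; the object and selection-tag names are placeholders for whatever your model uses.

```python
# Sketch: create a flat-colored material and assign it to an object,
# restricted to a named polygon selection (standard material used here
# as a stand-in for the Octane cartoon material).
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    # A simple skin-tone material as an example color.
    mat = c4d.BaseMaterial(c4d.Mmaterial)
    mat.SetName("Skin")
    mat[c4d.MATERIAL_COLOR_COLOR] = c4d.Vector(0.95, 0.80, 0.70)
    doc.InsertMaterial(mat)

    obj = doc.SearchObject("Body")  # placeholder object name
    if obj is None:
        raise RuntimeError("Object 'Body' not found")

    # Texture tag restricted to a polygon selection tag on the object.
    tag = obj.MakeTag(c4d.Ttexture)
    tag[c4d.TEXTURETAG_MATERIAL] = mat
    tag[c4d.TEXTURETAG_RESTRICTION] = "SkinSelection"  # placeholder selection tag name

    c4d.EventAdd()

if __name__ == "__main__":
    main()
```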
3) Camera Setting
Now it’s time to set up the camera to create the angle you want. Let’s adjust the camera position and orientation to capture the best view of your scene.
To see exactly what the camera sees, first press the camera button in the lower-right corner to create a camera.
Then activate the camera by clicking the frame icon at the far right of the camera entry in the Object Manager on the right, so that it turns white.
Once it turns white, as shown in the image, the view displayed on the left is exactly what the camera sees.
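If you’d rather script this step, here’s a minimal sketch of creating a camera and looking through it in the active viewport; the position values are arbitrary placeholders you’d tweak for your own framing.

```python
# Sketch: create a camera, place it, and make it the active view camera --
# the scripted equivalent of the camera button plus the white frame toggle.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    cam = c4d.BaseObject(c4d.Ocamera)
    cam.SetName("Render Cam")
    cam.SetAbsPos(c4d.Vector(0, 150, -400))  # placeholder position
    doc.InsertObject(cam)

    # Look through this camera in the active viewport.
    doc.GetActiveBaseDraw().SetSceneCamera(cam)

    c4d.EventAdd()

if __name__ == "__main__":
    main()
```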
4) Render Settings
The output size in Cinema4D is set in pixels in the Render Settings. Open the settings and enter the desired resolution and aspect ratio. After configuring these parameters, you can also reposition the camera as needed to achieve the best angle for your scene.
To extract the image exactly as the camera sees it, we will use the multi-pass feature. Please proceed as shown in the image above.
Among the settings, be sure to select the Path Tracing kernel and enable the Alpha Channel. Today’s task is a simple composite with an AI-generated image, so it is crucial to render the female model without any background. Keep this in mind: the alpha channel is what lets the model be seamlessly placed onto any background or scene you composite later.
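As a reference, here is a small Python sketch of the standard Cinema4D side of these settings (resolution, multi-pass, alpha channel). The Octane-specific options, such as the Path Tracing kernel, live in Octane’s own settings dialog and aren’t shown here; the 1920x1080 values are just an example.

```python
# Sketch: set the output resolution and enable multi-pass and the alpha
# channel on the active render settings.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    rd = doc.GetActiveRenderData()

    rd[c4d.RDATA_XRES] = 1920           # output width in pixels (example)
    rd[c4d.RDATA_YRES] = 1080           # output height in pixels (16:9 example)
    rd[c4d.RDATA_MULTIPASS_ENABLE] = True
    rd[c4d.RDATA_ALPHACHANNEL] = True   # render the character without a background

    c4d.EventAdd()

if __name__ == "__main__":
    main()
```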
5) Lighting
Now, to see how the final render will look, we will add lights to the scene. Proper lighting is crucial to accurately represent the materials and details of your model in the render. Let’s proceed with positioning and adjusting the lights to achieve the desired visual effects.
As shown in the image, when you open Octane Render’s Live Viewer, you will see in real time what the scene looks like with the settings you’ve just specified. However, since there are no lights in the scene yet, it appears black.
To resolve this, as illustrated in the image, you should create Toon Lights and position them where needed in the scene to illuminate your model properly.
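For completeness, here is a rough scripted equivalent that drops a standard Cinema4D light into the scene as a stand-in; Octane’s Toon Lights are created through the Octane plugin’s own menus, and their plugin IDs aren’t covered here. The position and brightness are placeholders.

```python
# Sketch: add a simple key light above and in front of the character
# (standard light as a stand-in for an Octane Toon Light).
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    light = c4d.BaseObject(c4d.Olight)
    light.SetName("Key Light")
    light.SetAbsPos(c4d.Vector(200, 300, -300))  # placeholder position
    light[c4d.LIGHT_BRIGHTNESS] = 1.0            # 100% intensity
    doc.InsertObject(light)

    c4d.EventAdd()

if __name__ == "__main__":
    main()
```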
Haha, doesn’t the slightly cartoonish walking woman look rather good?
Once all the settings are complete, as described, please press the Render button and wait for a moment.
The rendered files should now be available in the designated folder.
Once the render is complete, we will proceed to import the EXR files into After Effects for further processing and compositing.
6) Composite
Once you open After Effects, import the EXR files as a sequence; all the rendered EXRs will come in as the frames of a single video clip. Create a composition from the imported sequence, then duplicate the layer once for each multi-pass you extracted from Cinema4D to build up the layers. This lets you control and adjust each aspect of the render individually for post-production effects.
After that, go to the Effect Controls panel on the left and, for each duplicated layer, select the channel you extracted as a multi-pass. Then use the blending modes in the middle of the timeline to composite the layers. By adjusting the blending modes, you can merge the passes to add visual depth and realism, or to achieve various creative effects in your composition.
Now that we’ve finished the basic setup in After Effects, the only thing left is the background, right?
Shall we head over to Midjourney now?
7) Creating Background from Midjourney
Go ahead and open Midjourney, as shown in the image, and type in your desired background elements as prompts to generate an image.
Since I’ve rendered the character with a cartoon-like color scheme, I want the background to have a slightly animated feel as well. Here is the prompt I’ll use to achieve the desired result for a background:
“Blue sky with clouds [low angle], [no birds], [no tree] in Japanese animation style --ar 16:9”
This prompt specifies the style and elements to include in the background, aiming to match the aesthetic of the foreground character and create a cohesive scene.
8) Render Video
Once you have downloaded the background image from Midjourney, bring it back into After Effects and place it beneath all of the EXR layers, whose background is transparent thanks to the alpha channel. This will serve as your background.
Isn’t it incredible? To achieve a more professional and precise compositing, additional steps would typically be necessary.
However, I’ve simplified the process to demonstrate how you can incorporate an AI-generated background image into your animation project.
Please keep this as a reference for a basic introduction to integrating AI elements into your video productions.
As shown in the image, when you click “Add to Render Queue”, a render list will appear at the bottom, and a settings panel will pop up allowing you to change to the desired format. Adjust the settings according to your preferences, and then press the render button on the right to start rendering your project!
This will finalize your video, creating a completed file in the format you have specified.
3. Final Rendered Video
Ta-da! The animation of a woman walking against a beautiful sky background is complete! Isn’t it quite charming?
The introduction may have been lengthy, but the main point of today’s discussion was to show that background elements, aside from the main animations and objects, can be created using AI. Isn’t it fascinating that what once required extensive labor and time to create can now be done more easily?
Today’s content could be particularly useful for users who are already creating a lot of artwork using 3D software. I am currently working on an animation using various AI platforms and will share a glimpse of it once it’s ready.
Understanding all the features from 3D to AI might seem daunting, but platforms like Blender are becoming increasingly accessible with continuous tutorials being uploaded on YouTube. I encourage you to give it a try—it’s really fun!
Stay tuned for my next post where I’ll discuss new features of WebUI! Keep an eye out!