Ludivine Moro provided a detailed breakdown of the Alecia project, discussing modeling and texturing the character's head, outfit, hair, and accessories using Blender, Maya, ZBrush, Substance 3D Painter, and Unreal Engine.
Introduction
Hi everyone! Please allow me to introduce myself. I'm Ludivine Moro, a young 3D Character Artist; I finished my studies a year ago, in 2024. I've spent the last five years exploring the world of video games, though I've loved them for much longer.
Art in general appeals to me. I started out sketching in Photoshop and the paint tools of the time on a graphics tablet. My passion for 3D developed later, when I discovered ZBrush, which I began learning on my own before attending school. From that discovery followed a whole string of others.
I learned a lot of new software and technical skills during my education, but my main strength is creating characters efficiently. For two years, I specialized in 3D character art at Artside Game School, studying human anatomy and the behavior of cloth folds, among other things. There, 3D Character Artists working in the game industry gave me the opportunity to learn from them, and I will always be grateful for all the knowledge they shared.
I'm currently looking to start an adventure in the gaming industry and work with a team and studio on a shared project. In the meantime, I keep developing personal projects to continue improving; skill development is endless.
Alecia Project
My main goal for this project was to have fun; its theme is FUN. Having finished my education, I tried to put my knowledge to use and pick up new skills along the way. My approach to this project was a little disorganized; I basically followed my desires and emotions. Adapting through all of the project's difficult moments was a true challenge.
To push myself to produce realistic elements, I began with a simple eye. The urge was sparked by an Instagram post by Jared Chavez. I wasn't aiming for an in-game asset at first, so I didn't set any polygon constraints. I decided to make the character game-ready as the work progressed.
Initially, I aimed to develop a female figure evoking an African background while wearing a modern dress. The project took a different path along the way. In this interview, I only discuss my work on the final render; I won't show the shader or the initial Maya render, for example.
The 2D concept I created for this character explains the changes in artistic direction.
References
I start by creating a reference board in PureRef, which I enrich as the project progresses and my choices evolve. I looked for a model to reference, and settled on Alecia Morais. I thought it would be a good experience to texture medium-dark skin.
References are a crucial component, because this is the point where I decide the course I wish to follow. I browse through a lot of images on Pinterest to gather ideas and settle on a starting point for the project. I also look at other artists' work on ArtStation; it gives me a good benchmark to aim for.
The Beginning: Eye
I used Digital Emily's eyes as a base for the eye; it's a free resource. Building the eye from scratch wasn't the point of this exercise, and since this was my first attempt at a realistic eye, the well-documented Maya render helped me understand the technical setup.
I sculpted the detailed iris in ZBrush. To replicate the effect of multiple layers and save time, I used radial symmetry set to eight on my mesh. Once the base was done, I turned symmetry off and added variation to the sculpt using the Standard and Inflate brushes.
For added depth and improved ambient occlusion in the bake, I like to create a second mesh superimposed over the eye. I used an eye mesh with a hole to simulate depth during baking and get a more realistic effect.
I used Marmoset Toolbag to bake the interior of the eye; it's my preferred option for all baking. Even though I could have used only Substance 3D Painter for this eye, Marmoset gave me the best control. Unlike the high poly, the low poly's center doesn't have a hole.
I use a traditional technique in Substance 3D Painter for the texturing. I built up several color layers to get a realistic yet unique blue eye. Thanks to the two meshes I made, I could also use a color ID, which was useful for adding depth and detail to the eye. For further contrast, I then applied a curvature generator and a dirt generator.
Since I was after realism, I chose to render in Maya, which I hadn't used in a few years. It turned out to be a useful way to revisit Maya's lighting and shaders.
Face
After finishing the eye render, I set out to get better at modeling the human face. It's challenging, but I believe that's why I like it. And why not take advantage of the eye I'd just made? I began with a head base mesh I had collected as a student. To make things easier for myself, I started from a clean model with good topology; by eliminating star poles in the topology, for example, I save time when adding details.
I started with symmetry enabled and used the Spotlight tool to display my reference image in transparency over the model, which helps me find the ideal proportions. I set up a camera framing the face in the same position as my reference so I could compare the two directly. To avoid diving into detail before the proportions are right, it's crucial to zoom out and rotate the model often.
Once the proportions and secondary details were complete, I projected the skin pore detail. For this, I used the method Pierre Alain Reymond taught me while I was a student.
I smoothed the buccal region and the inner eye area to minimize gaps that can cause interference during projection. I grabbed the free base mesh from the Texturing XYZ site and used a set of XYZ textures, since they match that base mesh's UVs. To fit the XYZ base mesh to my sculpt, I used the ZWrap plugin.
The method involves accurately fitting the base mesh to my sculpt by placing matching points at the same locations on both meshes before wrapping.
I exported my low-poly sculpt and the wrapped face mesh, then created a bake project in Marmoset with the VFace mesh in the high folder and my sculpt in the low folder. I loaded the displacement map into the face texture's albedo slot and baked the albedo. This procedure transfers the texture map onto my sculpt mesh's UVs: thanks to the wrap, the skin pores conform to my face and land in the correct positions.
I then split the texture into three maps. A single displacement image stores three levels of detail (primary, secondary, and tertiary) in its red, green, and blue channels, and I extracted each one in Photoshop.
Because ZBrush reads the map with the opposite orientation, I flip the image vertically right away in Photoshop.
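This channel split and vertical flip is simple to script as well. Here is a minimal sketch in plain Python, assuming the image has already been decoded into a row-major grid of (R, G, B) texels; a real pipeline would go through an image library rather than nested lists:

```python
def split_and_flip(pixels):
    """Split an RGB displacement image into its three detail maps
    (primary = R, secondary = G, tertiary = B) and flip each one
    vertically, matching the orientation ZBrush expects."""
    red   = [[px[0] for px in row] for row in pixels]
    green = [[px[1] for px in row] for row in pixels]
    blue  = [[px[2] for px in row] for row in pixels]
    # A vertical flip is just reversing the row order
    return [channel[::-1] for channel in (red, green, blue)]

# Tiny 2x2 test image: each texel packs the three detail levels
img = [[(10, 20, 30), (11, 21, 31)],
       [(12, 22, 32), (13, 23, 33)]]
primary, secondary, tertiary = split_and_flip(img)
print(primary)  # [[12, 13], [10, 11]]
```

The same two operations (channel extraction, then a flip) are exactly what the Photoshop steps above perform by hand.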
Next, I loaded the primary map into ZBrush's Displacement Map menu and baked it into a new layer, then repeated the process in new layers for the secondary and tertiary maps. Thanks to this, I can manage the skin pore detail and each layer's intensity separately.
Finally, I added a last pass of detail, like moles and alphas. For instance, I restored the lip zones and the ears, where the projection would not work. I also inflated the skin pores slightly for additional realism.
Hair
I used XGen in Maya to produce the hair. This was my first attempt at an XGen groom; until then I had only used it to make hair cards.
I followed the Kubisi art videos to help me make the peach fuzz and eyebrows. A traditional method is to duplicate the mesh, keep only the area where the hair should grow, and delete the extra polygons.
After selecting this mesh, I created a new XGen description and used the placing and shaping guides to control the primitives. For example, to represent the eyelashes more accurately, I positioned the guides and adjusted various parameters and modifiers.
For the remaining hair, the core task was finding the right parameters for African hair. Isaac Olander's tutorial is a nice place to start when learning about the modifiers, although I adapted it for my character.
For this haircut, I made several collections: the braids, the baby hairs, and the baby hairs of the braids. I also made two variants of the ponytail, one denser and one adding more variation.
I built the haircut in several passes before getting a good render; it took many days of work. Starting each day with fresh, well-rested eyes is important. For exporting and integrating into Unreal Engine, I followed Andrew Giovannini's guide; he explains things far better than I can, so I highly recommend his tutorials.
Body
To save time, I started with a body mesh that had good topology and skin detail. In the first iteration, I posed it and built the base mesh, adjusting the body to the stance and proportions I was after. Reference is crucial at this stage, as in every other phase.
After the change of direction, I looked for a way to return the model to a T-pose without losing the nice UVs and texturing. I used AccuRIG for this, a free auto-rigging program for characters. The process involves placing points at particular locations, like the elbows, wrists, etc.
This let me import the character rig into Maya and easily set a T-pose for my character; I just fixed the skinning and tweaked the mesh. For greater accuracy, I deleted the character's history to detach it from the rig, then exported the mesh to ZBrush. There, I could use the Move and shaping tools to put the hips and shoulders back in the proper places more quickly.
Back in Maya, I bound the body to the rig again. You might ask yourself: how did I get the skinning back? Before erasing my history, I used the free Maya plugin SkinTools, which preserves the skin weights and reapplies them via UV projection after the mesh is reimported. The method works because the rig bones and UVs don't change.
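I can't speak for the plugin's internals, but the principle of restoring weights by UV projection can be sketched simply: because the UV layout is untouched, every vertex of the reimported mesh can borrow the weights of the source vertex closest to it in UV space. A toy Python illustration with hypothetical data (a nearest-vertex lookup standing in for true surface projection):

```python
def transfer_weights_by_uv(src_uvs, src_weights, dst_uvs):
    """For each destination vertex, copy the skin weights of the
    source vertex whose UV coordinate is closest. The mesh changed
    in 3D, but the UV layout did not, so UVs act as a stable key."""
    def uv_dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    out = []
    for uv in dst_uvs:
        nearest = min(range(len(src_uvs)), key=lambda i: uv_dist2(uv, src_uvs[i]))
        out.append(src_weights[nearest])
    return out

# Toy example: two joints, weights stored as (joint0, joint1)
src_uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
src_weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
dst_uvs = [(0.05, 0.02), (0.9, 0.1)]  # vertices moved in 3D, same UV layout
result = transfer_weights_by_uv(src_uvs, src_weights, dst_uvs)
print(result)  # [(1.0, 0.0), (0.5, 0.5)]
```

This also makes clear why the trick breaks down if the UVs are edited between export and reimport.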
With the body back in T-pose, I could move on to making and adjusting the clothing and accessories.
Clothes/Accessories
For my initial attempt, I built the clothing in Marvelous Designer, but after some changes I looked for ways to bring my character to life, which affected the clothing as well.
To get two thicknesses and two colors, I created a basic plane. The cape is a semi-cylinder; it is simulated in Unreal Engine with Chaos Cloth.
I couldn't keep relying on Marvelous Designer: otherwise I would have ended up with folds baked into the normal map that I didn't want, and I was changing my mind about the character's art direction and top.
So I changed the clothes and the art direction, keeping the assets I had already made and reusing them to build the new outfit.
I sculpted the accessories in ZBrush. Given the stylized direction I chose, I didn't put a lot of detail into the sculpts; I prefer to add detail through textures.
When making these kinds of accessories, I like to start with a mask to define the accessory's shape on the surface of my body mesh. Then I make an extract with zero depth and smooth the border with the Polish tool. I then run ZRemesher at low resolution and use Move to adjust the mesh to the desired shape.
After that, I use Panel Loops to add thickness. I love using Dynamic Subdiv: I mark my creases, set the crease levels for the accessories to a maximum of 1 or 2, and once the subdivision is applied, I sculpt where necessary.
Retopology & UVs
I retopologized my assets quickly, mostly to save time, because, as mentioned, the body already had proper topology before baking and texturing. For this stage, I used Blender and specifically the RetopoFlow add-on. I know this topology isn't as clean as I usually make it, but that wasn't the objective of this project; here, I prioritized high-quality texturing and rendering.
I have five UV maps: three in 4K for the body, one for the face, and the last one for the eyes. I packed and straightened my UVs with RizomUV; I discovered this software during my studies, and I'm in love with it.
I didn't set polygon limits or map resolutions, so this project isn't game-optimized. I wanted to test my knowledge and learn new techniques.
Afterward, I used Marmoset Toolbag for baking. I prepare in ZBrush by applying different colors to the different parts of my character, which lets me set up the vertex color map and build my ID map later. To catch issues, I baked the maps and didn't hesitate to switch back and forth between Substance 3D Painter and Marmoset.
Texturing
I think texturing is the moment a character truly comes to life. Because I lacked a clear concept, I had to experiment with a number of colors and textures before I was happy. I worked in OpenGL in Substance 3D Painter.
I usually start with basic paint and a prepared material, then strip out its unused functions and make it my own. I add features, color variations, and roughness breakup to give the character life through textures. I like to add an Ambient Occlusion layer set to Multiply.
To create contrast, I also add two layers: one simulating light from above and a darker one at the bottom. There are no real restrictions when it comes to texturing; you just need to know your software and have solid references so you understand how a piece of clothing or an accessory should respond.
To pick an appropriate hue for the face, I combine a Texturing XYZ map with the texturing I did in ZBrush by spotlighting the model's photo. Since I don't have a dark-skin XYZ map, I add a passthrough layer and adjust the color and lightness with an HSL filter. On top of this, I replicate the skin's natural color variation using blue, red, and yellow patches.
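The HSL adjustment applied over the XYZ map boils down to shifting hue and rescaling lightness and saturation per color. A minimal sketch using Python's standard colorsys module (the swatch values below are made up for illustration; Substance's HSL filter applies this across the whole layer, not one color at a time):

```python
import colorsys

def adjust_hsl(rgb, hue_shift=0.0, lightness_scale=1.0, saturation_scale=1.0):
    """Shift hue and scale lightness/saturation of one RGB color in 0-1 range.
    Note: colorsys names the model HLS, but it is the same HSL idea."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    h = (h + hue_shift) % 1.0
    l = min(max(l * lightness_scale, 0.0), 1.0)
    s = min(max(s * saturation_scale, 0.0), 1.0)
    return colorsys.hls_to_rgb(h, l, s)

# Darken a hypothetical mid-tone skin swatch toward a deeper complexion
base = (0.78, 0.56, 0.44)
darker = adjust_hsl(base, lightness_scale=0.55, saturation_scale=1.1)
print(darker)
```

Dropping lightness while slightly boosting saturation, as here, keeps the result from going gray, which is the usual pitfall when darkening skin tones by lightness alone.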
I also looked for ways to add makeup, and experimented with several makeup looks while creating this character.
Unreal Engine
Once I decided to change the project's direction, I chose to render and animate the character in UE. I integrated the character into Unreal Engine quickly and worked on the texturing and animation simultaneously. I switched between a lot of tools on this project in particular, and I wasn't scared to go back when I needed to.
A. Head
In Unreal Engine, I started with the MetaHuman head; I had never used it before this project, and its many potential applications piqued my curiosity. After some tests, I looked for a way to animate the head and ultimately just retrieved the rig. The MetaHuman plugin effectively constructs a MetaHuman head from my sculpt: I placed the plugin's markers on my sculpt, and it produced a new MetaHuman mesh. However, I lost shape detail as well as the UVs and textures I had made. After testing and finding no way to recover them directly, I looked for a way to transfer my UVs to the new mesh.
For the body, I used the same method as before: after properly saving the skinning, I exported the MetaHuman rig to Maya, inserted my head, and connected the rig bones.
Unbinding the MetaHuman mesh
The procedure for importing the skinning onto this mesh isn't exactly the same, but it follows the same premise as the body. I transferred the weights between the MetaHuman head and my head using the ngSkinTools plugin's transfer menu. To do this, I had the MetaHuman head's joints, rig, and mesh in my scene, along with a copy of the rig bound to my own head mesh.
Because the rig is a straight copy, it's crucial to confirm that the bones match; I had no issues there. I couldn't transfer by UVs because they aren't the same, and I couldn't use vertex IDs either, so I used the "closest point on surface" mode for the vertex transfer.
After that, I simply cleaned up and fixed the skin weights, then imported the rig with the updated mesh into Unreal Engine. I had to tweak a lot of things in UE to make it functional. I started by reimporting the old rig's MetaHuman DNA onto my new skin, then made copies of the post-process Blueprint, the control board, and the AnimBP and modified them to fit my updated mesh. I tested a variety of methods and followed a number of tutorials; I won't get into specifics, because that would require another article.
B. Body
I used the MetaHuman Blueprint: I made a replica of it and swapped in my modified mesh. I imported the body with its rig, keeping the head jewelry, the rope, and the head itself separate from the body. I also planned to animate the rope.
I added Chaos Cloth simulation to the fabric on the body. I made two cloth assets: one for the dress, cape, drape, and shoulder cloth, and a second for the small skirt. I ran a variety of experiments before finding a good render. When creating a cloth asset and testing Chaos Cloth, I couldn't see how the clothing would collide with its surroundings. Even so, I think the simulated result renders better.
So I used skeletal mesh Chaos Cloth. The technical steps are straightforward: first, enable the Clothing menu in the Window menu; next, right-click the mesh section you want to simulate and create clothing data from the selection; finally, assign the character's physics asset. Then simply click the "Activate Cloth Paint" button and paint the influence. The cloth simulation parameters are adjustable; I mostly verified and tuned the self-collision, testing until I found the parameters for the ideal render. Given the cloth's thickness, the challenge here was the penetration between the cape and the dress, as well as the cape's self-collision.
For the body control rig, I used UE's modular rig editor, and it's genuinely easy to use. You drag the spine module from the module asset section, for example, and place it on the rig's spine joint, which creates a socket; the module then creates the controls automatically.
This was a new method for me; it let me pose my character directly in UE instead of going back into Maya like I used to.
C. Rope
I decided to animate the rope. I placed a number of joints along it in Maya and skinned it, then looked for a way to make a dynamic simulated rope in Unreal Engine, following a YouTube tutorial. I ran into an issue, so I placed joints all the way along the rope and tried to simulate only the lower part: I had to tell UE to animate only the joints I selected in my character's Blueprint. I searched for a tutorial and combed several technical sites for this, but there isn't much out there.
I came up with a solution: I created a socket on the skeletal mesh and built a hierarchy of nodes in the Blueprint to indicate which joints to simulate. It isn't simple to explain in words; the image will be easier to understand.
D. Blueprint
As mentioned, I modified the MetaHuman Blueprint. I imported the head jewelry mesh, the body, the hair groom, the rope, and the head as skeletal meshes. The hierarchy is crucial: the head is the body's child, and the jewelry, hair, and rope are the head's children, so they follow its movements. Another issue was that the head's animation needed to match the body's movement; I searched for and tested a variety of approaches to find the one that worked best.
I also ran into an issue with the hierarchy of joints in the head's rig. Because I reused MetaHumans, I had to rebuild the hierarchy properly in Maya for the control rig to function. Since I had a duplicate of the UE MetaHuman, I'm still not sure how the hierarchy ended up out of order.
E. Shader
I made an advanced shader for the skin. To make it more realistic, I used the displacement map technique described earlier in this interview, retrieving the XYZ zone map after baking it in Marmoset to fit my UVs.
The shader separates the map's red, green, and blue channels and assigns a different roughness and specular response to each zone. I have the XYZ map, a pore coat, and an overall roughness. For the color, I have three basic parameters: power, desaturation, and intensity.
The normal is a combination of multiple maps; because I have a micro-detail normal layer, I can control its intensity and tiling.
I used Substance 3D Sampler to convert the three displacement maps into normal maps, each with its intensity under my control, and combined them with the final normal map I exported from ZBrush.
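Combining several normal maps, as this shader does, is usually done with a blend that keeps the detail of both inputs instead of averaging it away. A minimal per-texel sketch of the common "whiteout" blend in plain Python (an illustration of the general technique, not the actual UE material):

```python
import math

def whiteout_blend(n1, n2):
    """Whiteout blend of two tangent-space normals stored as 0-1 RGB texels:
    sum the XY detail, multiply Z, then renormalize."""
    # Unpack from 0-1 texture range to -1..1 vectors
    x1, y1, z1 = (2 * c - 1 for c in n1)
    x2, y2, z2 = (2 * c - 1 for c in n2)
    x, y, z = x1 + x2, y1 + y2, z1 * z2
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    # Repack to 0-1 for storage
    return ((x + 1) / 2, (y + 1) / 2, (z + 1) / 2)

flat = (0.5, 0.5, 1.0)  # the "no detail" normal
print(whiteout_blend(flat, flat))  # blending flat with flat stays flat
```

Blending a detail normal over a flat base leaves the base unchanged, which is exactly the property you want when layering a tiling micro-detail map over a baked one.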
The subsurface uses the MetaHuman profile, which I tweaked to fit my character's skin color.
The eye is the second shader and the most complex. I recovered the MetaHuman's material and made some adjustments so that, after tweaking and testing the parameters, it fit my textures.
Just for fun and to add some lore, I made a gem texture for my character, following the RenderBucket tutorial on YouTube. With that solid base to work from, I adjusted the parameters and let my imagination run wild.
Rendering
Lighting
Every time, I start with basic lighting: a key, a rim, and a fill; basically a three-point lighting scene. I then add lights to suit my character and its shapes; depending on the situation, I like to use spotlights, rect lights, and point lights. I learned that to get the best render, I needed to adjust the lighting for each camera, especially the rim light for the head closeup.
Level Sequencer
I use the Level Sequencer to render in UE, including only my camera and my Blueprint. The control rig doesn't activate automatically when I add my body BP; to enable it, click the plus sign, navigate to Control Rig, and choose the rig prepared earlier. I can then select the controls from the Level Sequencer's menu and pose my character's face and body along the timeline, making sure to set keys at each position. The idea is the same as in Maya. Finally, I render with the basic options in the Movie Render Queue.
Post Processing
I don't have a complex post-process; I simply add a dark vignette in Photoshop. Because the cloth simulation is unpredictable, I had to do a lot of render attempts: I move the model during the simulation to get a nice cloth placement and render between 100 and 150 frames for a single image. To get a nice drape position, I also had to composite multiple shots together in Photoshop.
The Final Result
Conclusion
To be clear, I am not an animator, a rigger, or an Unreal expert. In addition to teaching me a lot of new skills, this project let me validate what I had learned and gave me confidence in my abilities. On some days I made rapid progress; on others, a single issue could stall me for two or three days. I created this project in my spare time, largely without outside help; only toward the end did I ask for assistance with the lighting and texturing.
My project isn't a good template for a novice; it's long and chaotic. But if I have one piece of advice, the most crucial thing is to explore and simply enjoy yourself.
If I decide to rework this project, I'll seek more feedback from friends or experts when possible; receiving feedback is always beneficial and helps you perform at your highest level. To sum up, I think I've told the whole story of this project.
A big thank you for this interview, 80 Level, and thank you to all the people who support me and who will take the time to read this article. You can follow my journey on ArtStation and Instagram.