
Props for Games: Sculpting, Retopology, LODs

Dennis Nollen prepared a breakdown of his prop Aegis of Vigilance based on the Ashes of Creation concept and talked in detail about blockout, sculpting, retopology, LODs, and texturing.

Introduction

Hello, my name is Dennis Nollen. I have been drawing and doodling art since I could crawl to pick up my crayons. I went to college for a bachelor's degree in art and animation roughly two decades ago, then started freelancing and have been lucky enough to meet, learn from, and work with some of the most amazing people ever since.

Aegis of Vigilance: Pre-Production

Normally, I work on projects that allow me to touch upon every aspect of production, from concept and design all the way through to implementation in the engine. At the same time, I knew I wanted to add a fantasy-themed piece to my portfolio and work on a project where I would have to match another artist’s concept rather than working from my own. Being a fan of the work the guys and gals at Intrepid Studios have been up to on their Ashes of Creation project, I settled on their Aegis of Vigilance concept found here.

I liked the idea of doing a shield because I could follow the concept for the front part as well as come up with a design for the back of the shield. In my experience, starting a reference sheet should either come right before or right after the concepting phase and can really help solidify the direction of the project. I made a reference sheet with the concept, some prehistoric bones, and other materials I thought could be incorporated. 

Blockout

Scale and proportions are just as important in environment and prop design as they are in character art, where they are more commonly talked about. This is why I always recommend taking the time to do a blockout of your project - it may eat up some production time, but it makes it easier to spot and resolve any design conflicts before things get complex. I knew UE4 was the end destination for this asset, so I started with Epic’s default mannequin in a blank scene and box-modeled some simple meshes to start the shield. This is where the blockout phase allows me to check things such as:

  • Will the shield clip into the character mesh?
  • Does the distance between the grip and the arm brace match the mannequin's forearm?
  • Do all the pieces fit together and function the way they should?

Once I am confident with my blockout, I export those meshes to use as a reference during the sculpting phase. 

Sculpt

I try to think of digital sculpting the same way as I would think of the traditional one. It’s just clay: it can be shaped, added to and taken away from, and so on. This mindset helps me not get too attached to any brush stroke while I am working. Two things I would recommend to any artist trying out digital sculpting are:

  • focus on large forms first and make sure they read well before moving on to any smaller forms
  • keep your mesh light in triangle count for as long as possible; not only does this keep your files performant, but it also helps you avoid a sculpt that gets noisy or busy just for the sake of having details

When I started my sculpt, I had a concept image in a lower resolution than the one I eventually found. In the low-resolution concept, I could tell there was some form of detail on the lower part of the shield, but I couldn’t tell what it was. I knew this shield was rugged and battle-worn, so I grabbed my Clay Buildup brush and started sculpting some angular features until I got something that felt both battle-tested and “badass”.

I wanted to optimize this project for games and was going to mirror as much of my UVs as I could to save texture space and allow for the highest texel density possible. This meant that, as I sculpted, I had to avoid adding unique details along the mirrored axis that would read as obvious repetition once mirrored. I had simple meshes from the blockout phase, but I still needed detail pieces such as the leather straps and forged metal pieces.

To get the general shapes needed, I used ZSpheres. The wonderful thing about ZSpheres is that they are non-destructive: you can move and scale them however you need, and when you get to a point where you want to sculpt, you make a copy and then apply Adaptive Skin, which creates an eight-sided cylinder at a low resolution. After doing that, I was able to take the ZModeler brush and delete four of those eight edge loops to create the four-sided leather straps quite easily.

Anyone looking for another tool in their ZBrush arsenal should definitely give ZSpheres a try as they are incredibly powerful. As for brushes, Clay Buildup is without a doubt my most commonly used one - it makes building up forms incredibly quick - along with the Move Topological brush to nudge forms as needed.

Once I have my large and medium forms laid out, I can move on to sculpting that defines the surfaces and what they represent. The bones, for example, were a series of brush steps starting with Clay Buildup for the overall form and shape, then MalletFast to create harsh denting and pitting. HPolish was then used to break up the harsh transitions and create places for visual rest, as well as to refine the carvings in the bones themselves. At this point, the bones were most of the way done, with a few more brushes needed to get to 100%. Those brushes were Smooth (used where a bone socket/joint would meet), MAHcut Mech B (used to define bone fractures and cracks), and a final HPolish pass to tidy up. This final detailing pass brought a sense of realism and helped the sculpt avoid looking too stylized.

There isn’t a correct series of steps when it comes to sculpting. Sometimes, I can get what I am looking for with one brush and one stroke, sometimes it is five brushes and many strokes. Here's a quick speed-sculpt that shows a couple of the brushes I used and some of the ways they can be used:

Retopology

Oddly enough, I must have been blessed in some weird way because I actually enjoy retopo quite a bit.

With that being said, there are a couple of key things I keep in mind when approaching my retopo. Firstly, mesh density and the polygon/triangle budget. Figuring out how many triangles you can spend will help you understand how detailed you can get with your retopo. If you have a budget of 1000 triangles and it is going to take at least 800 triangles just to form your silhouette, there is no need to waste time detailing something that will not fit in the budget.

This leads to the second key point: silhouette and deformation should be priority number one when it comes to spending your triangle budget, followed by the key features the viewer will interact with. It makes no sense to spend triangles on the inside of a shoe that no one will ever see when those same triangles could have been spent on features the viewer may see up close.

One more key thing to keep in mind: do not skimp on spending triangles where a lack thereof may cause baking errors later on. This is kind of a vague statement that is hard to explain fully without writing a multi-page technical document on the ins and outs of normal baking. Yet, for anyone interested, I would suggest going to Polycount where you can find a plethora of information on this topic.

With these key things in mind, I start my retopo by nailing down the silhouette and large forms, adding geo for deformations if the model is to be animated, and then spending the leftover triangle budget on smaller details the viewer will interact with. One last question I ask myself while doing retopology is: do I need triangles to define this surface, or can the normal map do the same job? If the normal map can do the heavy lifting, I try to conserve those triangles to be better spent elsewhere.
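
To make the budget arithmetic from earlier concrete, here is a throwaway Python sketch; the 1000/800 numbers come straight from the example above, and everything else is hypothetical:

    # Hypothetical budget check using the 1000-triangle example from above.
    TRIANGLE_BUDGET = 1000   # total triangles allotted to the asset
    silhouette_cost = 800    # triangles needed just to hold the silhouette and deformation

    detail_budget = TRIANGLE_BUDGET - silhouette_cost
    if detail_budget <= 0:
        print("No room left - let the normal map carry the small details.")
    else:
        print(f"{detail_budget} triangles left for close-up features like grips and straps.")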

LODs

Games use LODs, or “level of detail” meshes, which are swapped in and out depending on how close an asset is to the viewer. This helps save resources and keep frame rates as high as possible. These days, triangles are fairly cheap in the overall computing cost, so the goal of my first retopo was to end up with a “hero” version of the asset which would be used on a character equipment screen or possibly an item crafting screen. A hero version is not designed to be used in large quantities - you wouldn't want twenty players on screen with hero versions eating up resources - so once that part is done, it’s time to make our LOD 0. This is the mesh a game would use with its graphics set to maximum. Usually, a game will have LODs 0-3, with each higher-numbered LOD using fewer and fewer triangles and a lower texture resolution. In a production environment, auto-LOD tools are a viable option, especially for the LODs the viewer will only see from a distance.
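
To show the swapping idea in isolation, here is a minimal Python sketch of distance-based LOD selection. The switch distances are made-up values, and real engines such as UE4 typically switch on screen size rather than raw distance, so treat this purely as an illustration:

    # Pick an LOD index from a list of switch distances (hypothetical values, in meters).
    LOD_SWITCH_DISTANCES = [5.0, 15.0, 40.0]  # beyond the last value, use the final LOD

    def pick_lod(distance_to_camera):
        for lod_index, switch_distance in enumerate(LOD_SWITCH_DISTANCES):
            if distance_to_camera < switch_distance:
                return lod_index
        return len(LOD_SWITCH_DISTANCES)  # farthest LOD (e.g., LOD 3)

    print(pick_lod(3.0))   # 0 - hero/LOD 0 quality up close
    print(pick_lod(25.0))  # 2 - cheaper mesh at mid distance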

Generally speaking, once I have my “hero/LOD 0” mesh, I lay out my UVs, load up Marmoset Toolbag, import my high- and low-resolution meshes, and start doing test bakes to make sure no errors or artifacts are present. This is another topic full of technical details that could be discussed at length, so I am going to speak in general terms.

One of the goals I want my UV map to accomplish is shared texel density across the entire asset. To give a rough idea of industry texel density standards: a third-person camera is roughly 512px/m, a top-down camera roughly 128-256px/m, and a first-person camera roughly 1024px/m. Normally for games, a project will have a set texel density goal defined for the entire world that the art team adheres to. If you are not sure what texel density you should be aiming for, art leads/directors and tech artists will usually have the answer.
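
As a rough worked example of those numbers, texel density simply maps world-space size to pixels. The 512px/m figure comes from the standards above; the shield's size here is an assumption for illustration only:

    # Texel density: pixels of texture per meter of world-space surface.
    def required_texture_span(world_size_m, texel_density_px_per_m):
        return world_size_m * texel_density_px_per_m

    # Hypothetical example: a shield roughly 1.2 m tall at third-person density.
    span_px = required_texture_span(1.2, 512)
    print(span_px)  # 614.4 -> the tallest UV shell wants ~614 px, so a 1024 map fits comfortably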

Secondly, I want to maximize mirroring, which frees up UV space and gives every shell as much texture resolution as possible.

Thirdly, I want plenty of space between my UV shells to avoid any artifacts the mipmap process may create, and I group shells by their material types. Putting UV shells of similar materials next to each other helps avoid texture bleeding when a texture is downscaled in the engine.
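
The padding requirement follows from how mipmaps work: every mip level halves the texture, so shell padding shrinks by half each time. A minimal Python sketch of that relationship; the "keep at least 2 px at the last mip you care about" rule is one common guideline, not a figure from this breakdown:

    # Padding of P px at mip 0 becomes P / 2**n px at mip n, since each mip halves the resolution.
    def padding_for_mips(mips_to_protect, min_px_at_last_mip=2):
        return min_px_at_last_mip * (2 ** mips_to_protect)

    print(padding_for_mips(2))  # 8  -> ~8 px of padding on a 1024 map still leaves 2 px at mip 2
    print(padding_for_mips(3))  # 16 -> protect one more mip level and the padding doubles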

Once I am happy with my UV layout and LOD 0 bakes, I can make a copy of my LOD 0 and use it as a base for my LOD 1. I don't want to get into software specifics, but most programs have a function to collapse a mesh’s verts/edges while retaining the UVW mapping information. This part of making my LODs is basically taking away the least noticeable verts/edges possible until I hit my target triangle reduction goal. I then copy my LOD 1 and use that as a base for LOD 2, and so on. Generally speaking, I aim for a 25-30% reduction in triangles at each LOD stage: if LOD 0 is 100 triangles, LOD 1 should be around 75 triangles and LOD 2 around 50 triangles.
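
The LOD chain math above can be written out directly. The 30% reduction factor below is just one point inside the 25-30% range mentioned, and the triangle counts are illustrative:

    # Generate target triangle counts for an LOD chain from a per-step reduction factor.
    def lod_targets(lod0_tris, reduction=0.30, lod_count=4):
        targets = [lod0_tris]
        for _ in range(lod_count - 1):
            targets.append(round(targets[-1] * (1.0 - reduction)))
        return targets

    print(lod_targets(100))    # [100, 70, 49, 34] -> close to the 100 / 75 / 50 rule of thumb
    print(lod_targets(12000))  # the same ratios applied to a larger, hypothetical budget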

When I have my bake maps exported from Marmoset Toolbag (AO, normal map, curvature, height, position, thickness, material ID, concave, and convex), it is time to jump over to Substance Painter and start the texturing process.

Texturing

No matter what style I am going for in a project, I always start out by setting up my layer structure. This is mostly the process of creating a folder for each material type an asset will require: a folder for wood pieces, a folder for metal pieces, and so on. If you have a material ID map baked out, you can use Substance Painter's mask by color selection feature to sort all of this out in a matter of minutes. Once my folder/layer structure is sorted out, it's time to start texturing.
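
Under the hood, a color-selection mask is just "pixels that match this ID color". A tiny NumPy sketch of the idea, purely illustrative and not Substance Painter's actual implementation; the ID colors and tolerance are placeholder values:

    import numpy as np

    # id_map: H x W x 3 array of baked material ID colors (0-255).
    def mask_by_color(id_map, target_rgb, tolerance=8):
        diff = np.abs(id_map.astype(np.int16) - np.array(target_rgb, dtype=np.int16))
        return (diff.max(axis=-1) <= tolerance).astype(np.float32)  # 1.0 where the ID matches

    # Hypothetical 2x2 ID map: red = metal, green = wood.
    id_map = np.array([[[255, 0, 0], [0, 255, 0]],
                       [[0, 255, 0], [255, 0, 0]]], dtype=np.uint8)
    print(mask_by_color(id_map, (0, 255, 0)))  # 1.0 on the green/wood pixels, 0.0 elsewhere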

My Substance Painter workflow really depends on the art style of the project. If I am going for realism, I tend to rely more on PBR smart materials I have authored both in Substance Painter and Designer. This not only speeds up the production but keeps world style consistency between assets as they can share the same materials.

When it comes to more stylized work, my process is much more artistic. I try to use fill layers and masking so I can build up multiple layers of color, harnessing the various masking techniques in Painter to really control how those colors interact with each other and get applied to the asset. One of the reasons I prefer a fill-layer-and-masking workflow over normal paint layers is that it is far less destructive. You can drag a completed mask from one layer/material to another and then modify that mask further on a different part of an asset, saving time. I am able to drive masks with my bake maps, such as curvature, AO, etc., and I can also modify those bake maps with filters and effects to make the colors blend exactly how I want while staying completely non-destructive.

On top of all these aspects, if I want to add custom brush strokes to my mask, I can add paint layers to my mask stack, getting the best of both worlds.

Let's take the texturing process for the lower front part of the shield as an example. I started with a very simple greyish-blue fill layer as a base for my metal. It was then just a matter of building up my color layers with individual masks to match the concept and the feel I wanted to achieve. One of the common ways to harness the power of masking in Substance Painter is to drive a layer mask with a baked curvature map plus a Levels effect. Another technique I use constantly is using my baked position map to drive a smooth gradient mask that blends colors. These are sometimes called top-down masks or grounding masks as they can give a sense of direction/weight to a layer.
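
Both of those masking tricks boil down to simple per-pixel remaps of the baked maps. Here is a hedged NumPy sketch of the two ideas - a levels-style remap of curvature and a top-down gradient from the position bake - with placeholder thresholds and random arrays standing in for the real textures:

    import numpy as np

    def levels(x, in_low, in_high):
        # Levels-style remap: clamp and stretch a grayscale map into the 0-1 range.
        return np.clip((x - in_low) / (in_high - in_low), 0.0, 1.0)

    # Curvature-driven mask: push bright edge curvature toward 1, everything else toward 0.
    curvature = np.random.rand(4, 4)              # stand-in for the baked curvature map
    edge_wear_mask = levels(curvature, 0.6, 0.9)  # placeholder thresholds

    # Grounding / top-down mask: remap the position bake's height channel into a soft gradient.
    position_y = np.random.rand(4, 4)                     # stand-in for the position bake's Y channel
    grounding_mask = 1.0 - levels(position_y, 0.0, 0.5)   # heavier/darker toward the bottom of the asset

    print(edge_wear_mask.shape, grounding_mask.min() >= 0.0)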

You can get great results with a few fill layers and a couple of masks, but if you need to build more complex results, it is also possible to layer and even nest your current layers in a new folder and then apply additional masking to that folder in the layer stack. Another great aspect of using baked maps to drive masks on fill layers is that you can easily create stylized smart materials that can be applied elsewhere later on with the same results, saving tons of time. The material for the bones was made with only one major bone piece visible - when I was happy with the result, I simply modified the folder's masking via color selection to apply it to all of the other sections of the bone material.

One tip regarding the PBR workflow: I cannot stress enough the importance of your roughness maps and just how powerful they are in achieving a polished end result. Roughness maps tell the viewer exactly what kind of surface they are looking at, and in many cases, these maps are responsible for making the audience either believe or not believe the surface. While I am creating my roughness maps, I find it a good idea to single them out: I generally toss a dark gray fill layer on top of my layer stack with a “passthrough” blend mode. This is temporary and allows me to see exactly how my roughness map is affecting the light hitting my surfaces. Is there enough contrast in the roughness map to be pleasing to the eye? Is there too much contrast, possibly confusing the viewer? I can check whether all the values are in the right ranges so they read as the surface intended, and adjust them if needed.
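
For a quick sanity check on those roughness values outside of Painter, a few lines of NumPy can report the range and contrast of an exported roughness channel. The "flat" threshold below is purely illustrative, not an industry number:

    import numpy as np

    def roughness_report(roughness):
        lo, hi = float(roughness.min()), float(roughness.max())
        print(f"min {lo:.2f}, max {hi:.2f}, mean {roughness.mean():.2f}, contrast {hi - lo:.2f}")
        if hi - lo < 0.15:  # illustrative threshold
            print("Very flat roughness - the surface may read as one uniform material.")

    roughness_report(np.random.rand(8, 8))  # stand-in for an exported roughness channel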

Another aspect of creating content for game engines is checking your assets in the engine. If I considered my projects finalized straight out of Substance Painter, I would be in for a rude awakening. While PBR materials and programs like Substance Painter minimize the differences between what you see in Painter and what you get in the engine, adding a final adjustment pass in Painter after checking your asset in-game will only improve the final result.

Closing

I hope this article will help some artists out there looking to create game-ready assets or break into the industry. I would like to thank 80 Level for giving me the opportunity to contribute to the community. Websites like 80lv are an invaluable resource for artists of all levels and I am happy to be a part of that. 

Dennis Nollen, Environment/Prop Artist

Interview conducted by Kirill Tokarev
