Ron Frölich walked us through the Tribuzio project, detailing the techniques used to bring a uniquely designed 19th-century pistol to life using Blender, Substance 3D Painter, and Marmoset Toolbag.
Introduction
Hey, hey, 80 Level readers. My name's Ron, and I am a Principal Hard Surface Artist in the game industry. I currently work at Remedy Entertainment in Finland.
I first got into 3D art during the Counter-Strike 1.6 modding days, when people made custom skins for the game's weapons. Making something in 3D and then seeing it become part of a game was incredible to me at the time. That fascination stuck with me, and so I kept modeling different props and weapons and eventually built up a portfolio that landed me an entry-level position at Crytek in Germany in 2007. This gave me the opportunity to learn a lot from a ton of talented people and work on games like the Crysis series, Ryse: Son of Rome, and Hunt: Showdown.
A couple of years ago, I joined the Hard Surface team at Remedy, where I worked on Alan Wake II and the Max Payne 1 & 2 Remake.
Inspiration
The inspiration for the project came from a show called What Is This Weapon? on the Royal Armouries Museum's YouTube channel. While the Tribuzio itself wasn't featured there, the videos nevertheless sparked my interest in modeling a weapon as unusual as the ones they often show.
I started searching through online auction house catalogues for references to cool-looking guns and eventually stumbled across pictures of the Tribuzio. It fit what I was looking for to a tee: a quirky-looking gun that I hadn't really seen modeled before. I found a few more pictures of different versions of the gun online and also came across a Forgotten Weapons video featuring it, which showed some additional angles not covered in the auction house pictures.
For personal projects, I like to try to model these weapons to a degree that allows me to fully disassemble them for one of the presentation shots. Figuring out exactly how a gun like this functions is a puzzle in itself and can be a little challenging. However, it also gives me a better understanding of engineering and mechanics in general, which I can then apply to other projects.
Blockout
I usually think of the blockout process as "just modeling everything". In practice, this means building all the details as if they were a high-poly while making sure all curves and cylinders are round enough so I can use them in the final low-poly. The blockout phase is only about building the forms and details. Shading and topology do not matter to me at all yet. In fact, I keep all my meshes flat-shaded during this process so I'm not distracted by any shading issues that may come up.
Blender, and especially Box Cutter, allows me to be very flexible and loose while working on the blockout. One example of this is how I use Bevel modifiers when I'm building the body of the gun. Its silhouette is made of several curves smoothly blending into one another. Instead of trying to build these curves in a destructive way by hand, I build a very low-poly version of the body's silhouette and add a Bevel modifier that is controlled via vertex groups for each curve. This way, I can tweak a very simple shape until I'm happy with the silhouette. Since I'm keeping the modifiers live, I can also retroactively change the number of segments in each modifier to make sure I have a consistent vertex distribution.
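Purely as an illustration of that idea (one plausible setup, not the author's exact scene), here's a minimal bpy sketch that adds one vertex-group-limited Bevel modifier per silhouette curve, using vertex bevels to round the corners into arcs. The group names, widths, and segment counts are placeholders, and the vertices themselves would still be assigned to the groups in Edit Mode:

```python
import bpy

body = bpy.context.active_object  # the low-poly silhouette mesh

# One live Bevel modifier per curve, each limited to its own vertex group.
# Group names, widths, and segment counts below are placeholders.
for group_name, width, segments in (("curve_back", 0.04, 12),
                                    ("curve_grip", 0.02, 8)):
    if group_name not in body.vertex_groups:
        body.vertex_groups.new(name=group_name)   # vertices get assigned in Edit Mode
    mod = body.modifiers.new(name=f"Bevel_{group_name}", type='BEVEL')
    mod.affect = 'VERTICES'          # round this group's corner vertices into a curve
    mod.limit_method = 'VGROUP'
    mod.vertex_group = group_name
    mod.width = width
    mod.segments = segments          # stays live, so vertex distribution can be tuned later
```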
Another very handy tool to keep things fast and loose is using Blender's Boolean modifiers, especially in combination with Box Cutter. When modeling more complex objects, I often only build a fairly simple basic shape and then move into boolean modeling. This way I can create complex shapes while working with a handful of objects that themselves are relatively simple. Together, these objects compound into a more complicated whole.
Blender uses modifiers for boolean operations, so it's very easy to change or reorder operations to keep things logical and easy to work with.
Since everything is kept live and non-destructive, it's also easy to start cutting without fearing that you might ruin your mesh or have to undo a lot of things. Boolean modifiers also don't care much about topology, so it's easy to focus on shapes and get the asset to look right. Sometimes, the easiest way for me to start working on a complex part of an asset is to just cut a box into it and take it one extrusion at a time.
In reality, complex objects are often machined from basic shapes as well, and working with Blender and Box Cutter feels like a digital version of that machining process. For example, the bolt of the pistol started as a cylinder that I just cut chunks out of.
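Without Box Cutter, the underlying setup is just a Boolean modifier referencing a cutter object. As a small sketch (object names are placeholders):

```python
import bpy

bolt = bpy.context.active_object            # e.g. the cylinder the bolt starts from
cutter = bpy.data.objects["Cutter_Box"]     # placeholder cutter object

cut = bolt.modifiers.new(name="Cut", type='BOOLEAN')
cut.operation = 'DIFFERENCE'
cut.object = cutter
cut.solver = 'EXACT'

# keep the cutter visible as a wireframe but out of renders, so it stays editable
cutter.display_type = 'WIRE'
cutter.hide_render = True
```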
These two methods, combined with some simple poly-modeling, usually get me to where I need to be during the blockout stage.
Another important step I do during this stage is naming my objects. Unique object names obviously make baking a little easier later down the line, but more importantly, for me, it makes sure I understand what I am modeling. When I confidently name something "hammer" or "bolt", that means I did the research and understand what this part does. When I don't know what to call something, I probably don't understand what it does well enough, which means there's a chance I didn't model it well enough, either.
High-Poly
With my blockout done, I duplicated the entire collection and started working on the high-poly. Since all my detail has already been built in the blockout phase, I'm mostly focusing on smooth bevels on corners and shading at this stage.
Nowadays, I almost exclusively use one of two different techniques for this kind of asset: Bevel & Weighted Normal Modifiers or a Lazy Remesh approach. Traditional SubD modeling is fairly time-consuming, so I use it only as a fallback when I cannot achieve the results I'm looking for with the other two methods.
Bevel & Weighted Normals
I usually try to create a high-poly mesh using this method first, as it is the quickest and simplest. It is also the technique that will most likely yield a mesh that will bake very cleanly because it will be very close to the low-poly mesh in volume and silhouette, reducing the chance of common bake issues like waviness on edges.
I start by adding a Bevel modifier to my mesh and set its Limit Method to Weight. After that, I can select any edges I want beveled and simply assign a Bevel Weight to them to have the modifier create a nice, rounded edge. The Bevel Weight amount determines how wide the bevel is going to be.
There are a few settings in the Bevel modifier that should be considered. I usually turn off Clamp Overlap and Loop Slide since they rarely work in my favor. Setting the Outer Miter Type to Arc also usually generates a nicer mesh.
The Bevel modifier alone often doesn't result in perfectly clean shading. In this close-up, you can see that there's a pretty nasty shading error at the front of the extractor. This is where the Weighted Normal modifier comes in handy. It takes the Bevel modifier into account and changes the mesh's vertex normals to create cleaner shading with fewer gradients.
In order to get the most out of the modifier, I set its Weighting Mode to Corner Angle and turn on the Face Influence option. The latter works best when the Face Strength Mode in the Bevel modifier I added earlier is set to Affected. These settings work best for this particular object, but the Weighting Mode and Face Strength Mode in particular seem to depend heavily on the underlying geometry, so it's worth playing around with them to find the combination that yields the best results.
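Put together, the modifier stack described above looks roughly like this in bpy; the width and segment values are placeholders, and the edge bevel weights themselves are still assigned by hand (or with the Hard Edge macro shown later):

```python
import bpy

obj = bpy.context.active_object

bevel = obj.modifiers.new(name="Bevel", type='BEVEL')
bevel.limit_method = 'WEIGHT'          # only edges with a Bevel Weight get rounded
bevel.width = 0.002                    # placeholder width
bevel.segments = 2
bevel.use_clamp_overlap = False
bevel.loop_slide = False
bevel.miter_outer = 'MITER_ARC'
bevel.face_strength_mode = 'FSTR_AFFECTED'   # feeds the Weighted Normal's Face Influence

wn = obj.modifiers.new(name="WeightedNormal", type='WEIGHTED_NORMAL')
wn.mode = 'CORNER_ANGLE'
wn.use_face_influence = True

# Blender versions before 4.1 also need Auto Smooth enabled for custom normals to show
if hasattr(obj.data, "use_auto_smooth"):
    obj.data.use_auto_smooth = True
```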
Lazy Remesh
This has become my go-to approach for more complex meshes that don't work well with the Bevel modifier approach. It is a little more involved and takes a little more preparation time, but it's still very straightforward.
Because I built my base mesh with the low-poly in mind, I need to add a Subdivision modifier ahead of the Remesh modifier to smooth out any curves and cylinders and avoid a faceted look. The preparation for this approach is similar to the others: I select any edge I want to keep sharp and set its Crease value to 1. With that done, I can add a Subdivision modifier and make sure its Use Creases option is checked.
Here's where playing it fast and loose with topology comes back to haunt me, though. Not only is this not a mesh made for remeshing, it's not made for SubD either. There are overlapping faces everywhere; it's a completely broken mesh, and I'll have to face the music and accept that I may have to clean this up a little bit.
However, before I start to clean up manually, I want to see how much Blender can clean up for me by adding a Triangulate modifier to the object ahead of the Subdivision modifier. With its Minimum Vertices value set to 5, it only triangulates faces that have 5 or more vertices. This means that any quads will stay quads while N-gons are turned into triangles. While the wireframe looks hideous, the mesh is holding its shape much better than before. There are a few manual adjustments I still have to make, but they are far fewer than before the triangulation.
Once this is handled, I proceed with a fairly standard Remesh workflow in Blender. I add a Remesh Modifier, set it to Sharp, and crank the Octree Depth to a value that yields enough resolution so as not to cause artifacts when smoothed. For smoothing, I use a Corrective Smooth Modifier set to Only Smooth and play around with the iterations until I get the smoothness I am looking for. Since I added that SubD modifier earlier, I am getting a nicely smoothed high-poly mesh, even though my base mesh wasn't built with remeshing in mind.
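As a sketch, the full Lazy Remesh stack in the order described above (Triangulate, Subdivision, Remesh, Corrective Smooth) can be set up like this; the subdivision levels, Octree Depth, and iteration counts are placeholders to be tuned per mesh:

```python
import bpy

obj = bpy.context.active_object

# 1) Triangulate only large N-gons so the broken base mesh holds its shape
tri = obj.modifiers.new(name="Triangulate", type='TRIANGULATE')
tri.min_vertices = 5            # quads stay quads, 5+ sided N-gons become triangles

# 2) Subdivide to smooth curves and cylinders, respecting the Crease = 1 edges
subd = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subd.levels = 2                 # placeholder
subd.use_creases = True

# 3) Rebuild the surface as a dense, even mesh
remesh = obj.modifiers.new(name="Remesh", type='REMESH')
remesh.mode = 'SHARP'
remesh.octree_depth = 8         # raise until smoothing no longer causes artifacts
remesh.use_smooth_shade = True

# 4) Smooth the remeshed surface into soft, even corners
smooth = obj.modifiers.new(name="CorrectiveSmooth", type='CORRECTIVE_SMOOTH')
smooth.use_only_smooth = True
smooth.iterations = 20          # placeholder; adjust until the smoothness looks right
```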
To speed both of these techniques up, I made a Hard Edge macro shortcut in Blender (Using the Pie Menu Editor add-on) that does 4 different things at once. It adds a Bevel Weight of 0.1 and a Crease value of 1 to any selected edge and marks it as a sharp edge as well as turning it into a UV seam. I am using this shortcut extensively during my high-poly workflow while also laying a bit of groundwork for my low-poly at the same time.
Since any edge I'd like to remain sharp in my high-poly is likely to also be a sharp edge in my low-poly, and since any sharp edge in my low-poly needs to be a UV cut, this shortcut is saving me a ton of time.
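The author builds this shortcut with the Pie Menu Editor add-on; purely as an illustration, the same four steps can be reproduced with a small operator built from standard Edit Mode operators (the idname below is made up):

```python
import bpy

class MESH_OT_hard_edge(bpy.types.Operator):
    """Mark selected edges as 'hard': bevel weight, crease, sharp edge, and UV seam at once."""
    bl_idname = "mesh.hard_edge_macro"     # hypothetical idname
    bl_label = "Hard Edge"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        # run in Edit Mode with the target edges selected
        bpy.ops.transform.edge_bevelweight(value=0.1)  # value is an offset, so fresh edges end up at 0.1
        bpy.ops.transform.edge_crease(value=1.0)       # Crease 1 for the SubD/remesh pass
        bpy.ops.mesh.mark_sharp()                      # hard edge for the low-poly
        bpy.ops.mesh.mark_seam(clear=False)            # matching UV seam
        return {'FINISHED'}

bpy.utils.register_class(MESH_OT_hard_edge)
# bind mesh.hard_edge_macro to a hotkey or pie menu of your choice
```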
Low-Poly & UVs
On this weapon in particular, I'm actually spending very little time on the low-poly. I want to keep it very detailed, so I'm not removing as much geometry detail as I would on a game project, for example.
I also don't have to do any retopology or adjustments on any curves since my base mesh is already built with low-poly geometry in mind. Lastly, because the approaches described above are largely modifier-based, I don't have to remove many support loops or other geometry designed to help support a SubD approach, for example. Once I duplicate all my high-poly meshes and delete their modifiers, I am nearly done with my low-poly on most of these meshes.
Because I have been using the Hard Edge shortcut I mentioned above, my mesh already has a bunch of hard edges and UV seams applied to it, which I can build on and refine to create my final UVs. One option that makes unwrapping in Blender much more convenient is the Live Unwrap checkbox in the top-right Options drop-down menu. This automatically unwraps the mesh every time an edge is turned into a UV seam. With it turned on, I usually have my UV editor and my 3D viewport side by side and simply add or remove UV seams and see what the unwrap looks like.
Most meshes need very little manual tweaking of the UVs themselves; however, I would definitely recommend supplementing Blender with the UV Toolkit add-on. It has a ton of functions that help with the occasional straightening or aligning of UV islands, which can be a bit of a pain in vanilla Blender. I also use a free add-on called Texel Density Checker to ensure all my objects adhere to the same texel density.
When it comes to packing my UVs, I almost always fully rely on UVPackmaster. Once all my individual meshes are unwrapped, I select them and simply let the tool do its magic. There are a ton of settings that can be tweaked, but I usually just set an appropriate Pixel Margin value and make sure that it keeps overlapping UV islands together. I also turn on Enable Heuristic and keep it running for a few seconds, which is usually enough to get a decently well-packed UV layout.
After the UVs are done, I add a Triangulate modifier to my meshes and send my high-poly and low-poly to Marmoset Toolbag for baking. Adding a Triangulate modifier ensures consistent triangulation between Blender, Toolbag, Substance 3D Painter, and any game engine the asset ends up in. Make sure to tick the Keep Normals checkbox so that any custom normals are preserved when triangulated.
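In bpy terms, that last step amounts to something like this, run on the selected low-poly meshes before export (the hasattr guard is only there because the Keep Normals option is not present in every Blender version):

```python
import bpy

for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    tri = obj.modifiers.new(name="Triangulate", type='TRIANGULATE')
    if hasattr(tri, "keep_custom_normals"):
        tri.keep_custom_normals = True   # the "Keep Normals" checkbox
```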
I often find a few flaws in the mesh or UVs during the bake process, so there's usually a little bit of back and forth at that stage. Once these are ironed out, my mesh is baked and ready for texturing.
Texturing
After baking my mesh in Marmoset Toolbag, I bring the low-poly into Substance 3D Painter to start texturing using the Specular/Gloss workflow. While a little less intuitive at the beginning, I think Specular/Gloss is a lot more flexible and allows for more control than Metal/Rough.
My texturing process, as well as my Substance 3D Painter file, is separated into five different stages:
- Height Channel sculpt pass
- "Helper" Layers
- Base materials
- Unique wear
- Dirt
The GIF above shows the Tribuzio as well as another gun, the Gaulois, since some steps are better visualized on one than the other.
The height channel sculpt pass is simply normal map detail that I figured would be easier or more practical to do in 3D Painter than in the model. There is relatively little additional height detail on the Tribuzio, whereas all the engravings on the Gaulois took a few evenings to finish.
More often than not, I forgo modeling small details into my high-poly if I think I can do them in Substance 3D Painter. This way, I'm keeping things flexible and can easily change small details in the texturing process rather than having to go back, change my mesh, and rebake it. I also put a layer with an anchor point directly above my height sculpt layers and use that to drive the Micro Detail inputs in generators like the Ambient Occlusion or Curvature so that they can capture the fine detail added during this pass.
Helper Layers don't affect the textures directly, but they help me keep things clean and organized. I realized that I often use the same handful of grunge and dirt maps multiple times throughout the process when building base materials or using them as overlays for dirt. Instead of setting up their scale and alignment every single time, I set them up once at the bottom of my stack and added an anchor point to them. This way, I can refer to the anchor point whenever I need some generic dirt or scratches for an overlay.
An extension of this is "wear masks". They work the same way, but I'm using them to reuse hand-painted details across all my materials. If you were to take a screwdriver to the gun and scratch across the whole side of it, the scratch would look different on the wood than it would on the nickel, but the shape of the scratch would be the same. Creating a mask with an anchor point at the bottom of my layer stack and reusing that throughout my materials allows me to create different implementations for wear that are driven by one mask. This way, I don't have to keep jumping between layers when painting in wear details.
Base materials are the most fundamental part of the texturing process. In my opinion, an asset should still look good, albeit a little too clean, with just the base materials applied. I usually approach all my base materials using the same process:
- Plain color fill layers to define the material in its most basic form
- Subtle grunge to add some texture and variation to the material
- Flash rust or worn metal to show the age and condition of the material
I also added some subtle dirt effects to the base material that indicate how well the object has been taken care of. Doing this as part of the base material step allows me to easily match the dirt to the material and have, e.g., hairs in crevices stand out exactly as much as I want them to, shifting their colors a little to match the material they're on.
The same process applies to the wooden grip pads. The main difference between wood and metal base materials is that I start wood materials with a photo texture rather than plain color layers. This adds a lot more detail right out of the gate, and the photo texture defines a lot of the characteristics of the final material, so I'm spending a good amount of time trying different wood patterns until I find one that I think fits the weapon well.
On clean assets like the Tribuzio, the base materials are doing a lot of the heavy lifting since there aren't a lot of unique texture details to draw the eye. I keep tweaking the base materials until I'm happy with them before moving on to the next step. If my base materials don't feel right at this stage, they also won't feel right once I add a bunch of dirt or wear on top of them.
My Unique Wear is made up of photos that are usually Warp projected onto the mesh and make up the bulk of the scratches and grunges on the weapon. I think using photos and matching them to the shapes of my asset is adding a realistic touch to the textures very quickly. There are often subtle color and shape variations in photos that are really hard to replicate by hand-painting dirt and wear onto your textures.
- I start by applying a photo to an area of the asset as a diffuse
- I add an anchor point to the layer and use that to mask out parts of the photo
- With my layer properly masked, I can add a fill to the stack and define some material properties for this wear element
The effects of each wear layer like this may be subtle, but cumulatively, they add a lot to the overall look of the weapon.
Lastly, I add some final Dirt Layers. On a clean asset like this, the dirt is largely defined in the base materials already and I only add some subtle effects on top. One of them is a simple dust layer that accumulates in the crevices of the weapon.
The other is a layer I'm using to simulate fingerprints. Fingerprints on a gun like this are an incredibly subtle effect that can be hard to spot, but I think it's important not to make them too obvious, lest they read like chalk marks or dusted fingerprints. I'm using a fill layer with a white diffuse at 1% opacity and a low gloss at 3% opacity.
For placement, I'm using the fingerprint alphas that come with Substance 3D Painter and projecting them onto the mesh. Since I've done a good amount of research on the gun, I have a pretty good idea of how one would handle the gun and where fingerprints would appear. I'm also keeping the size of the gun in mind because fingerprints are a pretty good scale indicator, and making them too big or too small can make the whole asset feel off.
Presentation
For the presentation, I'm going back to Blender again. I settled pretty quickly on a style that's reminiscent of museum exhibits. I felt the vibrant red background would contrast nicely with the Tribuzio as well as its counterpart, the Gaulois.
I start by creating a simple fabric material in Painter and put that on an equally simple lightbox mesh that is going to serve as my backdrop in Blender. The same material is used on the small wedge object that’s propping up the gun.
To help me frame the object, I am using a very simple camera setup made up of:
- A camera parent, which the camera is parented to and rotates around
- The camera itself to control the distance from the object as well as the pitch and roll
- A camera target which the camera is always pointed at
- A Depth Of Field target that I can move independently from the camera target
With this setup, I have an easier time framing my asset since I can split the process into distinct, succinct steps instead of trying to do it all at once (see the rough script after the list):
- Placing the camera target to define what part of the asset to focus on
- Rotating the camera parent into place so the focus part is well in frame
- Moving the camera itself into place to define distance and height
- Moving the DoF target into place to get some nice blurring
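Here's a rough bpy version of that rig, assuming a Track To constraint for the look-at behavior and the camera's built-in depth-of-field settings for the focus target; names and values are placeholders:

```python
import bpy

scene = bpy.context.scene

# empties for the rig; names are placeholders
cam_parent = bpy.data.objects.new("CameraParent", None)
cam_target = bpy.data.objects.new("CameraTarget", None)
dof_target = bpy.data.objects.new("DoFTarget", None)

cam_data = bpy.data.cameras.new("ShotCamera")
camera = bpy.data.objects.new("ShotCamera", cam_data)
camera.location = (0.0, -0.5, 0.15)   # distance and height, tweaked per shot

for obj in (cam_parent, cam_target, dof_target, camera):
    scene.collection.objects.link(obj)

# the camera rotates around its parent and always looks at the target
camera.parent = cam_parent
track = camera.constraints.new(type='TRACK_TO')
track.target = cam_target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

# depth of field focuses on its own empty, independent of the look-at target
cam_data.dof.use_dof = True
cam_data.dof.focus_object = dof_target
cam_data.dof.aperture_fstop = 2.0     # placeholder blur amount

scene.camera = camera
```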
When it comes to lighting the gun, I usually try to keep it simple and rely on an HDRI-based setup in Blender rather than placing a lot of lights. Apart from one area light on top of the scene, the screenshot below shows the entire "lighting setup".
I'm using two HDRIs blended together to achieve the final result. The main HDRI is used to bring out the shapes of the model. I try to make sure plane changes in the mesh are clearly separated, with one side lit and the other side much darker.
The second HDRI, which contributes much less to the overall image, is there to bring out material details. If there's a fingerprint or scratch I want to highlight, this is the HDRI I rotate into place to literally make it shine.
Obviously, mixing two different HDRIs together isn't a completely realistic approach to lighting an object, but just like with the camera setup, this allows me to focus on one part of the process at a time.
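One way to wire this up is a World node tree with two Environment Texture branches feeding a Mix Shader, each with its own Mapping node so the HDRIs can be rotated independently. The file paths, rotations, and mix factor below are placeholders:

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links
nodes.clear()

coords = nodes.new("ShaderNodeTexCoord")

def hdri_branch(path, rotation_z):
    """One HDRI -> Background branch; path and rotation are placeholders."""
    mapping = nodes.new("ShaderNodeMapping")
    mapping.inputs["Rotation"].default_value[2] = rotation_z
    env = nodes.new("ShaderNodeTexEnvironment")
    env.image = bpy.data.images.load(path)
    bg = nodes.new("ShaderNodeBackground")
    links.new(coords.outputs["Generated"], mapping.inputs["Vector"])
    links.new(mapping.outputs["Vector"], env.inputs["Vector"])
    links.new(env.outputs["Color"], bg.inputs["Color"])
    return bg

main_bg = hdri_branch("//hdri_main.exr", 0.0)       # shapes / key lighting
detail_bg = hdri_branch("//hdri_detail.exr", 1.2)   # rotated to catch highlights

mix = nodes.new("ShaderNodeMixShader")
mix.inputs["Fac"].default_value = 0.2   # how much the detail HDRI contributes
out = nodes.new("ShaderNodeOutputWorld")

links.new(main_bg.outputs["Background"], mix.inputs[1])
links.new(detail_bg.outputs["Background"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```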
One detail about the presentation I want to point out in particular is the lint that can be seen on the weapon but especially on the fabric backdrop. While I was setting up the camera for some of these shots, I noticed that I was getting very close to the ground and the flat fabric material wasn't cutting it at this distance. I needed to add a little more "3D" to the fabric.
I start by creating a very basic lint material in Substance 3D Painter based on some photos and put that on simple alpha cards that bend and bulge a little bit to add some depth to them. Using Blender's hair system, I can spawn these on select parts of the backdrop. It's fairly subtle, but in some of the pictures, you can see the lint leaning against or even crossing parts of the weapon, which I think adds a nice touch to the presentation.
I'm using the same material with a different color to also spawn some lint and hair on the weapon itself. However, I'm keeping the amount much lower to not overwhelm the asset with it.
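A hedged sketch of such a setup: a hair particle system that instances the lint card object and is masked by a painted density vertex group. Object names, the group name, and all counts and sizes are placeholders:

```python
import bpy

backdrop = bpy.data.objects["Backdrop"]     # placeholder names
lint_card = bpy.data.objects["Lint_Card"]   # the small alpha-card mesh

backdrop.modifiers.new(name="Lint", type='PARTICLE_SYSTEM')
psys = backdrop.particle_systems[-1]
settings = psys.settings

settings.type = 'HAIR'
settings.count = 300                 # sparse scattering
settings.hair_length = 0.02
settings.render_type = 'OBJECT'      # instance the lint card instead of strands
settings.instance_object = lint_card
settings.particle_size = 0.5
settings.size_random = 0.7

# limit the scattering to a painted vertex group so lint only appears where wanted
psys.vertex_group_density = "lint_density"
```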
To wrap it all up, I added a vignette and a little bit of lens distortion to the image, as well as some other post-effects in Blender's compositor. Using the compositor for post-effects instead of Photoshop is really cool because it lets you see your final result right there in the viewport as you're tweaking the rest of your presentation.
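For reference, a minimal compositor setup along those lines might look like this: a Lens Distortion node for the distortion, plus a blurred Ellipse Mask multiplied over the image for the vignette. All values are placeholders, and this replaces any existing node tree:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()   # starts the compositor tree from scratch

render = tree.nodes.new("CompositorNodeRLayers")

lensdist = tree.nodes.new("CompositorNodeLensdist")
lensdist.inputs[1].default_value = 0.02    # "Distort" socket, subtle barrel distortion
lensdist.inputs[2].default_value = 0.005   # "Dispersion" socket, slight chromatic fringing

# vignette: a soft ellipse mask multiplied over the image
mask = tree.nodes.new("CompositorNodeEllipseMask")
mask.width = 0.9
mask.height = 0.9
blur = tree.nodes.new("CompositorNodeBlur")
blur.filter_type = 'FAST_GAUSS'
blur.use_relative = False
blur.size_x = 300
blur.size_y = 300
mix = tree.nodes.new("CompositorNodeMixRGB")
mix.blend_type = 'MULTIPLY'
mix.inputs["Fac"].default_value = 0.3      # vignette strength

composite = tree.nodes.new("CompositorNodeComposite")

links = tree.links
links.new(render.outputs["Image"], lensdist.inputs["Image"])
links.new(lensdist.outputs["Image"], mix.inputs[1])
links.new(mask.outputs["Mask"], blur.inputs["Image"])
links.new(blur.outputs["Image"], mix.inputs[2])
links.new(mix.outputs["Image"], composite.inputs["Image"])
```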
Closing Thoughts
In this article, I went over the techniques I used for this project, but I want to note that there's no one right way to do hard surface modeling. There are a lot more approaches described in a lot more articles like this one, and some will lend themselves better to what you're doing than others. Think of each of these techniques as tools on your belt: the more you know, the better prepared you're going to be. It can be a little overwhelming to keep up with how people are doing things and try to incorporate new things into your own workflow, but this community is very willing to share its knowledge, so everything you need is usually just a Google search away.
One thing I only realized while writing this article is how much of my workflow is about separating and compartmentalizing different parts of the process into their own chunks. There are clear, distinct stages during modeling, during texturing, and even during camera placement for my presentation shots. I think this is something that helps me focus a lot since I only really have to care about one thing at a time and trust that things will come together if I just stick to the process. If you find the overall process overwhelming, maybe breaking it down into chunks and only dealing with one chunk at a time can help you focus as well.
Thank you for reading this rather lengthy article. I hope you enjoyed it and learned something from it.