Kestutis Rinkevicius explains how every tiny detail of Nosferatu's portrait was designed and shares his favorite online sources that help with texturing.
Introduction
My name is Kestutis Rinkevicius, but people call me Kestas for short. I am a Principal Character Artist currently working at Playground Games. Before joining Playground Games, I worked on various games, VFX, and commercial projects; one of my last freelance projects was character work for the new Total War: Three Kingdoms developed by Creative Assembly.
Creating Nosferatu
The original inspiration for this character came from old vampire movies, and the 1922 Nosferatu in particular. I wanted to achieve a good balance between creepy, scary, and even slightly goofy looks in this character. I did not use a particular concept, so gathering references from the internet and movies was my first step. To manage all the references, I use an awesome program called PureRef.
If I design a character myself, I do a quick DynaMesh sculpt to capture the initial idea; after that, I leave it for a few days to settle. If I still feel excited about the character when I look at it again with fresh eyes, I continue to work on it further.
Face Sculpting
I always start a head with a sphere or a head primitive and convert it to DynaMesh. I find that in the initial stages of exploration it is better to use DynaMesh because it is free of any topological constraints, so it is much easier to explore ideas. Here are a few WIP screenshots from the early stages of the head sculpting: the left one is the first ZTL save, and you can see it is much closer to the original Nosferatu from the movie. When sculpting, I like to focus on the bony landmarks and skull structure, so I actually keep a real-size skull on my desk at all times; it is an awesome reference.
Adding the Details
Once the character was blocked in with DynaMesh, I used Wrap to project proper topology onto my sculpt. I chose a mesh available on 3D Scan Store because for this project I wanted to try using real-world scan data to create the pores, so having a mesh with the same UVs as the scanned assets was a huge time saver. I work with layers in ZBrush, so every detail pass is isolated in its own layer. This workflow makes it really easy to change the model without destroying the pores, and it also makes it easier to add things like expressions later down the line.
Outfit
The outfit was initially blocked out with DynaMesh to work out the general shape and then finalized in Marvelous Designer. When I was happy with the Marvelous Designer sims, I retopologized everything and created the UVs in preparation for the texturing stage. I would really like to stress the importance of having nice, straight UVs for cloth. When unwrapping cloth parts, I pay attention to the orientation of the UVs: it is best to avoid any odd angles and stick with 90-degree rotations to get the best and most realistic cloth direction. Marvelous Designer simulations do not always have the best flow, so my last step is going back to ZBrush and fixing up the shapes, proportions, and flow of the wrinkles.
Initial Marvelous Designer simulation before jumping back into ZBrush
Hair
For all the hair work in this project, I used the Ornatrix plugin. Although this particular groom was not too complicated since it is so sparse, I still went through quite a few iterations until I found a look that I thought worked. I utilized the Ornatrix Hair Strand IDs to have different Frizz settings for different groups. I find that it is a really easy way to have localized control without having to paint any texture maps.
Texturing the Skin
For the skin details, I used a combination of pores from 3D Scan Store and a tiled Micro Bump Map from Texturing.xyz. To transfer the details from the scan, I simply generated a Displacement Map from the scan and then applied it back to my model. One thing to keep in mind is that a Displacement Map generated from the first subdivision level will also contain the larger secondary forms, so I generated mine from the third subdivision level to get rid of the secondary details and keep only the fine pores in the Displacement Map.
Here is a GIF comparison between the scan diffuse that was used as a starting point and the final repainted texture.
Eyes
For the eyes, I used one of the Texturing.xyz UHD irises as a starting point for my iris. They do come with color maps, but I chose to paint the color myself just to have maximum control over the look. All the painting was done in ZBrush using Polypaint; later, I exported the Polypaint as a Texture Map and generated a Displacement Map.
Iris sculpt and Polypaint in ZBrush
The sclera texture was created in Substance Painter; I find that the procedural marbles that come with Substance Painter are really nice for creating veins. As soon as I am happy with the general veins, I copy the same layer, blur the mask a little bit, and reduce the opacity so it creates a nice, subtle outline around each vein.
Instead of using a simple gradient ramp for opacity, I prefer to use a hand-painted opacity mask; I find that it gives a much more natural result, with intricate little details like tiny veins and noise.
Cloth Texturing
The cloth texturing was done mostly in Substance Painter to paint the general color. Then a tiled Texturing.xyz fabric texture was used as a micro displacement to create nice, crisp cloth details. Since I wanted to save on texture memory, using a smaller tiled texture worked really well. I find that adding fuzz and fluff balls to cloth really helps to sell it and makes the silhouette pop nicely with a rim light; I used Ornatrix Strand Propagation to scatter the small fluff balls.
Here's a side-by-side comparison showing how much detail can be squeezed in just by using tiled micro displacement.
Below is a screenshot from Maya and my Ornatrix stack showing the setup for the fluff balls. It looks way bigger in the viewport, but in the final render I used a transparent material, so it is barely visible.
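For anyone who wants to wire up the same kind of tiled micro displacement in their own Maya and Arnold scene, a scripted version of that hookup could look roughly like the sketch below. This is only an illustration under assumed names and values: the texture path, the repeat counts, and the "clothShadingGroup" / "clothMeshShape" nodes are placeholders, not taken from this project.

# A minimal sketch (Maya Python, maya.cmds, with Arnold/MtoA loaded) of feeding a
# small tiled fabric height map into a displacement slot. All names and values are
# illustrative assumptions, not the exact setup used for Nosferatu.
import maya.cmds as cmds

# File texture plus a place2dTexture so the small fabric map tiles across the UVs
fabric = cmds.shadingNode('file', asTexture=True, name='fabricHeight')
place2d = cmds.shadingNode('place2dTexture', asUtility=True)
cmds.connectAttr(place2d + '.outUV', fabric + '.uvCoord')
cmds.connectAttr(place2d + '.outUvFilterSize', fabric + '.uvFilterSize')
cmds.setAttr(fabric + '.fileTextureName', 'textures/fabric_height.tx', type='string')
cmds.setAttr(fabric + '.colorSpace', 'Raw', type='string')  # height data, keep it linear
cmds.setAttr(place2d + '.repeatU', 20)
cmds.setAttr(place2d + '.repeatV', 20)

# Route the height through a displacementShader node into the cloth's shading group
disp = cmds.shadingNode('displacementShader', asShader=True)
cmds.connectAttr(fabric + '.outAlpha', disp + '.displacement')
cmds.setAttr(disp + '.scale', 0.05)  # keep the micro detail subtle
cmds.connectAttr(disp + '.displacement', 'clothShadingGroup.displacementShader', force=True)

# Arnold needs subdivision on the mesh for fine displacement detail to show up
cmds.setAttr('clothMeshShape.aiSubdivType', 1)        # catclark
cmds.setAttr('clothMeshShape.aiSubdivIterations', 3)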
Rendering
For rendering, I used Arnold Renderer in GPU mode. This was my first time rendering something on the GPU, and I was really impressed with the speed of iteration that GPU rendering offers. One of the big things to consider with GPU rendering is that your project needs to fit into the memory of the graphics card; otherwise, it will not render. I am primarily a game artist, so making a few optimizations was not hard.
My advice for anyone who wants to optimize their scenes is very simple: convert your textures to TX, reduce any objects with heavy geometry (heavily decimated geometry eats up a lot of memory), and really consider the size of your textures. Some people use multiple UDIM textures for a single head, but unless the project requires insane close-ups, a single 8K texture and a tiled micro displacement are more than enough.
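Converting textures to TX can also be batch-scripted. Below is a small, hedged sketch that calls the maketx utility shipped with Arnold; the sourceimages folder and the extension list are assumptions for illustration, and MtoA's Render Settings also offer an auto-convert-to-TX option if you would rather let Arnold handle the conversion.

# A rough batch-conversion sketch: run maketx on every texture in a folder.
# The folder name and extension list are placeholder assumptions.
import subprocess
from pathlib import Path

TEXTURE_DIR = Path('sourceimages')            # hypothetical project texture folder
EXTENSIONS = {'.exr', '.tif', '.png', '.jpg'}

for tex in TEXTURE_DIR.iterdir():
    if tex.suffix.lower() not in EXTENSIONS:
        continue
    tx_path = tex.with_suffix('.tx')
    if tx_path.exists():
        continue  # already converted
    # maketx writes a tiled, mip-mapped .tx file that Arnold can stream efficiently
    subprocess.run(['maketx', '-o', str(tx_path), str(tex)], check=True)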
For the lighting setup, I like to use individual lights and sculpt the lighting around my character. For this project, I did not use an HDRI map, and all of the light comes from Arnold Area Lights.
Post-Processing
I find that renders that are overly sharp have that fake CG feeling because, in reality, even the most expensive and high-quality camera lenses still have some softness and distortion. If you look closely at frames from 4K movies, you will see that the images are quite soft and noisy even in the in-focus areas, so I try to mimic that in my renders using DOF.
To have precise control over the DOF distance, I like to use the Distance Tool and constrain one of its locators to a non-renderable sphere and the other to the camera. Then I connect the output distance of the distance shape to the camera's Arnold Focus Distance. Once it is all set up, I can move the sphere around or use snapping to pinpoint exactly where I want the focus plane to be; it can also be animated, which gives a lot of flexibility.
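As a rough illustration of that rig, here is what the same setup could look like scripted with maya.cmds. The camera name is a placeholder, and the aiFocusDistance / aiEnableDOF attributes assume the Arnold plugin is loaded; this is a sketch of the idea described above, not the author's exact script.

# A sketch of the focus-distance rig: a Distance Tool between the camera and a
# non-renderable sphere, driving Arnold's focus distance.
import maya.cmds as cmds

camera_shape = 'renderCamShape'                      # placeholder camera shape
camera_xform = cmds.listRelatives(camera_shape, parent=True)[0]

# Non-renderable sphere that marks where the focus plane should sit
focus_sphere = cmds.polySphere(name='focusTarget', radius=1.0)[0]
sphere_shape = cmds.listRelatives(focus_sphere, shapes=True)[0]
cmds.setAttr(sphere_shape + '.primaryVisibility', 0)

# Distance Tool: creates two locators and a distanceDimension shape between them
distance_shape = cmds.distanceDimension(startPoint=(0, 0, 0), endPoint=(0, 0, 10))
start_loc = cmds.listConnections(distance_shape + '.startPoint')[0]
end_loc = cmds.listConnections(distance_shape + '.endPoint')[0]

# Pin one locator to the camera and the other to the sphere
cmds.pointConstraint(camera_xform, start_loc)
cmds.pointConstraint(focus_sphere, end_loc)

# Drive Arnold's focus distance with the measured distance and enable DOF
cmds.connectAttr(distance_shape + '.distance', camera_shape + '.aiFocusDistance', force=True)
cmds.setAttr(camera_shape + '.aiEnableDOF', 1)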
The dust in the background was added to emphasize the sense of depth in the scene and to add some imperfections. To create the dust, I used a very simple method: I created a simple primitive in ZBrush and used the SnakeHook brush to distort the shape. After that, I made a bunch of copies around my character. The material was a simple Arnold standard material with very low opacity, so the dust is really subtle and not overpowering.
Duration of the Project
All in all, it took around half a year. It sounds like a lot, but I only work on my personal projects in my free time, so sometimes I work on it for only a couple of hours a week. I sometimes struggle to force myself to make progress on my projects, so my strategy is to keep making improvements and changes no matter how small they are (sometimes it is literally as small as opening the file and adding a button or a stitch). It is a bit like saving up spare change: eventually, I make enough small changes to amount to some significant progress, and that lifts my motivation to finish it up.
Kestutis Rinkevicius, Principal Character Artist
Interview conducted by Arti Sergeev