
Recreating the Magic of Rivendell in Houdini, Katana & RenderMan

Fady Kadry talks about how an insanely unrealistic bet turned into more than a year of hard work using a home studio equipped with industry-standard software and explains how the Rivendell project went from a bunch of references to a stunning picturesque animation step by step.

Introduction

My name is Fady, and I am the Head of Build at DNEG Montreal. Fun fact: I studied Classics at Ain Shams University in Cairo, a completely different field from what I chose to do for a living.

I have had the pleasure of traveling around the globe, working for the biggest names in the industry. I started my career as a freelancer in Egypt, doing modeling and texture painting for TV commercials and animated TV series. My first professional step was at AROMA Design & Solutions in Giza, Egypt, where I began my career proper as a generalist, doing modeling, texture painting, look development, and some other things like lighting.

I had the pleasure of working in feature animation at Barajoun in Dubai. After that, I had the opportunity to work for DNEG London as a Generalist TD. I also worked at Weta Digital as a Groomer/Modeler on shows such as Valerian and the City of a Thousand Planets, Justice League, and Alita: Battle Angel, among others.

I then joined Scanline VFX in Vancouver as a Groomer/CFX Artist, where I worked on shows like Justice League, The Meg, and briefly on Black Panther. Later on, I had the opportunity to join the extremely talented team at ILM Vancouver as a Modeler (doing both hard-surface and organic modeling) and Groom Artist on projects like Bumblebee, Aladdin, and Overlord.

Then I worked at Method Studios Vancouver and Montreal as a Lead Groomer and later was promoted to Head of Groom where I worked on such shows as Aquaman, The Christmas Chronicles, Bloodshot, Men in Black: International, and the remake of The Witches. 

This journey led me to where I am today. I started at DNEG Montreal as a CG Build Supervisor on shows such as Ghostbusters: Afterlife and Infinite, which marked my last hands-on work (at least for now), as I have since transitioned into the Head of Build role, where my responsibility is building a strong team that can tackle complicated asset work from modeling all the way to groom.

Working on the Rivendell Scene

I am a huge fan of The Lord of the Rings. My first groom demo reel was all about hobbits. 

But the Rivendell project started with a bet with my friends (which I ultimately lost) that I could re-create this concept in just one month!

Later I decided to change this concept as I wanted to create a nicer composition.

Apparently, I overestimated how many hours were left in a day after finishing my full-time job as Head of Groom at the time. Re-creating Rivendell was always a dream of mine, and it seemed like a good challenge as well as a great motivator to learn aspects of the industry that I had never tackled before, like water simulation, pyro simulation, scattering, and USD. What makes me proudest is how long this project took and how determined I was to finish it.

Working with references was quite easy since I have a long modeling background, so I could read and understand how the different buildings were created and connected from the little information you can extract from references collected off the Internet (which is sometimes the case when working in the industry and references are not provided).

I managed to install CentOS 7.6 as my operating system, which gave me the flexibility to install and modify some of the major DCCs that we use in our day-to-day life at any studio. 

Also, having CentOS as my operating system allowed me to build, install, configure, and connect my storage server to my workstation using the ZFS filesystem and NFS over 10Gb network interfaces, which let me transfer huge files at a reasonable speed.
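As a rough sketch of that kind of setup, assuming hypothetical device names, hostnames, and mount points (this is illustrative, not the exact configuration used on the project):

```shell
# Device names (sdb..sdg), the address 10.0.0.10, and paths are placeholders.
# Create a ZFS pool from the SAS drives with double-parity redundancy:
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Dataset for project data, with cheap LZ4 compression and no atime updates:
zfs create -o compression=lz4 -o atime=off tank/rivendell

# Share it over NFS to the workstation:
zfs set sharenfs="rw=@10.0.0.10" tank/rivendell

# On the workstation, mount with large read/write sizes for the 10Gb link:
mount -t nfs -o rsize=1048576,wsize=1048576 storage:/tank/rivendell /mnt/rivendell
```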

My home setup is the following:

1) Main workstation

  1. 2x Xeon 2698 (80 threads)
  2. 256 GB DDR4 RAM
  3. NVIDIA Titan RTX
  4. Multiple SSDs for system and cache drives
  5. EIZO CG24 monitor

2) Storage server

  1. 2x Xeon 2620 (24 threads)
  2. 128 GB DDR3 RAM
  3. 60 TB SAS storage

A major part of the project plan was creating a reliable and highly effective home pipeline that would tie everything together into a whole puzzle.

A key point of this project was to utilize Pixar's USD (Universal Scene Description) as a bridge between Houdini, Maya, and Katana, which at the time I had to compile manually. After a good deal of sleepless nights, one day it worked, and that day will always stay in my memory.
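As an illustration of what such a bridge looks like in practice, here is a minimal USD layer that stitches per-DCC exports into one stage. The file names and prim layout below are hypothetical, not the project's actual structure:

```usda
#usda 1.0
(
    defaultPrim = "rivendell"
)

def Xform "rivendell" (
    # Each DCC writes its own layer; this top-level file just references them.
    prepend references = [
        @./houdini/buildings_geo.usd@,
        @./houdini/scatter_trees.usd@
    ]
)
{
}
```

Katana (or any USD-aware DCC) then opens this one file and sees the assembled scene, while each source layer stays independently editable.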

Then came the time to install all the pieces that would help me get to the finish line. I decided to work with industry-standard tools such as The Foundry toolset (Mari, NukeX/Studio, Katana), Houdini, Mudbox, ZBrush, SpeedTree, UVLayout, RV, Modo, Substance Suite, Pixar's RenderMan.

Blockouts and Modeling

The goal of this project was to imitate a production workflow. I started by blocking out how the scene would look with simple shapes. Lighting the scene in a blocking stage played a huge role in dictating the overall complexity of the assets, and having the light in the scene while creating the camera position and animation gave me the opportunity to visualize the scene at an early stage. 

I used Houdini for the blocking stage. Also, I started to use Houdini to model the main buildings which was quite an adventure. 

For the buildings, as mentioned above, I utilized Houdini's powerful procedural nature while creating the roof tiles. With the help of the Internet, I found several approaches to creating a procedural brick wall setup, which I later adapted to work with pre-modeled roof tiles.

The same goes for the ornaments (as it is elvish architecture): I followed the approach described above, but this time I borrowed a lot from traditional modeling. And as I mentioned, the blocking stage allowed me to save effort when modeling and not overwork any single area that would not be very visible from the camera distance I chose for the shot.

The cliffs were quite the adventure. For the first time in my career, I decided to dive as deep as I could into procedural modeling. After looking into multiple references for cliffs from different European regions, and after many hours scouring Houdini techniques for building procedural cliff tools, I came to the realization that I could do it, which led me to create cliffGenerator v01.

CliffGenerator v01 allows me to draw a simple curve as a profile for the cliff. The tool then duplicates the curve to a pre-defined height, lofts it, and then extrudes the volume to a pre-defined depth. After these steps, the tool then scatters the points with shapes (you can create shapes or leave it to the default, cubes with Mountain SOP applied to them) with different orientations, then a Boolean subtraction operation is applied between the cubes and the cliff and that gives the organic final shape like the one you see in the image below. 
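The scatter step of the tool can be sketched as a small stand-in script. This is a minimal, hypothetical illustration of only that step (generating randomly oriented cutter transforms); in the actual tool this is a Scatter SOP driving Mountain-displaced cubes into a Boolean SOP, not Python:

```python
import random

def scatter_cutters(width, height, depth, count, seed=7):
    """Scatter randomly oriented cutter 'cubes' inside the lofted cliff slab.

    Each cutter is a dict of position, Euler rotation, and uniform scale.
    In Houdini, these transforms would drive copies of a cube (with a
    Mountain SOP applied) that a Boolean SOP subtracts from the cliff.
    """
    rng = random.Random(seed)
    cutters = []
    for _ in range(count):
        cutters.append({
            "pos": (rng.uniform(0, width),
                    rng.uniform(0, height),
                    rng.uniform(0, depth)),
            "rot": tuple(rng.uniform(0, 360) for _ in range(3)),
            "scale": rng.uniform(0.5, 2.0),
        })
    return cutters

cutters = scatter_cutters(width=50.0, height=20.0, depth=10.0, count=200)
print(len(cutters))  # 200 cutter transforms ready for the boolean subtraction
```

The variety in rotation and scale is what breaks up the silhouette and gives the subtraction its organic look.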

Another tool I created is the layeredRock generator. This tool is based on a tutorial, with some modifications to get the result I needed: layered rocks with all the different elements I had in mind. The resulting geometry is then taken into ZBrush for a remesh pass to clean it up and produce quad geometry that I can subdivide at render time whenever I use sculpted details driven by a Displacement Map.

I used Mudbox for my experiments with Displacement Maps; the reason I prefer Mudbox over ZBrush here is that I can easily run it on Linux (CentOS) and do not need a VM to launch ZBrush.

Working on Vegetation

Vegetation was a great deal of fun to do. I used the Houdini Heightfield Scatter toolset with some modifications to export a USD-compatible, render-time scattering system for later use in Katana and RenderMan.

I started creating the pines in SpeedTree 8, where separating the canopies and the barks allowed me to easily create varied materials based on the Cd attribute exported from Houdini and included in the USD file.

In total, I had 14 tree variations in three different sizes (Large, Medium, Small), plus around six variations of shrubs.

Thanks to the power of For-Each loops in Houdini and the USD prototype instancing node (which assigns a unique path to each primitive), the result was exactly what I expected: a really well-organized scattering system, as shown in this Katana scenegraph.
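The kind of per-instance path layout this produces can be illustrated with a small stand-in script. The path scheme below is an assumption based on the scenegraph described, not the exact node output:

```python
def build_instance_paths(prototypes, counts):
    """Assign a unique, stable scenegraph path to every scattered instance,
    grouped under its prototype, the way USD instancing keeps each
    primitive individually addressable in Katana."""
    paths = []
    for proto, count in zip(prototypes, counts):
        for i in range(count):
            paths.append(f"/rivendell/scatter/{proto}/instance_{i:04d}")
    return paths

paths = build_instance_paths(["pine_large", "pine_medium", "shrub_a"], [3, 2, 2])
for p in paths:
    print(p)  # e.g. /rivendell/scatter/pine_large/instance_0000
```

Because every instance has a deterministic path, material overrides and pruning in Katana can target individual trees or whole prototype groups with simple location expressions.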

With a solid scattering system in hand, creating grass was a breeze. With six different grass bundles varying from small to large, I had the flexibility to look-dev it quickly and iterate fast.

Another tool I had the joy of creating is shrubsScatter v02. I decided to use the power of Heightfield Scatter, and after a lot of trial and error, I managed to use the Heightfield Scatter node with native geometry rather than Heightfield geometry. That, by itself, was a huge breakthrough for me, as it allowed me to carefully place the shrubs where I wanted them. I definitely stole some of the shrubs from the previous tool.

Texturing the Scene

Let us split the scene into 3 different categories: buildings, mountains, foliage.

For the buildings, I followed a simple yet structured workflow. I had to finish the texture work for one entire building in order to set up a template. With this template, I could then streamline the texturing workflow for the rest of the buildings with ease, from base color and mask IDs to exporting in the proper color space to ensure uniformity.

For the first building, I had a challenge to overcome: the roof tiles. The roof tiles in the concept art had a unique, quite visible pattern that added a lot to the character of the building. Using references ranging from Internet finds to images from the set of Rivendell (by Weta Workshop), I started to carefully color-code the tiles in an arrangement that would later let me bake the colored vertices with Substance Designer's Bake Mesh tool into a Color ID Map, which I could then use in Mari with the help of the Color to Mask node.

I decided to use Mari’s powerful node graph system as a basis for this template I was creating for the building as it would be easier to transfer it between different Mari projects. 

I started by focusing on finishing the wood and then stone aspects of each building. As I had carefully UVed these buildings by splitting the wood/stone into different distinct UV shells, I could focus on one material at a time.

Once I was satisfied with the result on the first building, I took the same template and used it as a starting point for the rest of the buildings. This way, I successfully streamlined the texturing workflow for the buildings and finished them in record time.

For the mountains, I followed the same technique, but this time using Substance Painter and smart materials. I layered materials gathered from Megascans into a smart material, and with the help of the fantastic particle brushes, I was able to create a rain-dampening effect as a blending mask. Again, streamlining was the key to a fast turnover of assets in this project.

For the textures of the trees, I did nothing! I will explain in the look development discussion how I created this look for the trees and how I gave them this fall feel.

As mentioned above, I haven’t done any texture work. I relied on the power of RenderMan. With a PxrSurface shader and some blend nodes, PrimVar (Cd attribute exported from Houdini) nodes, and some tweaks, I managed to achieve a varied look. Here is a screengrab of the material setup inside Katana.

I used the same setup for the shrubs, grass, and all the foliage in the scene with tweaks to suit the placement.

Rendering

Part of the plan for this project was to use Katana as a look development and lighting tool. Katana is an industry-standard tool that makes look development and lighting processes enjoyable. 

Here is a screenshot of my Katana scene setup; I used the Live Groups approach. With Live Groups, you create a group in a Katana scene that includes all the geometry and materials, then load this group into a main Katana scene where you assemble all your Live Groups and create the final setup for your scene.

My main Katana scene contained nine Live Groups, as seen in the image below.

This scene also contains a Cryptomatte setup (orange backdrop), utility passes (green backdrop), and render settings (blue backdrop).

For the render settings, as I was creating an exterior scene, I used RenderMan's path tracer. I saved a lot of time, as I did not have to crank up the sampling to get a good enough image. If I were rendering this scene at a studio with a render farm at hand, I would have increased the sampling a little more to avoid noise, but I settled on 128 samples, and the result was good enough for what I needed.
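For reference, in RenderMan's RIB these choices would look roughly like the fragment below. Exact option names vary by RenderMan version, so treat this as a sketch rather than the project's actual settings:

```rib
# Path-traced integrator with a modest sampling budget
Integrator "PxrPathTracer" "pathTracer"
Hider "raytrace" "int maxsamples" [128] "int incremental" [1]
```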

Another trick I used (thanks to Remi Bitawi, Lead Lighting Artist) was rendering the volumes separately and utilizing RenderMan's fantastic denoiser: I rendered the volumes at only 32 samples overall and ran the denoiser, which gave me a great result at six minutes per frame.

The water sim was a great adventure in itself: multiple tutorials, countless hours of research, and testing a waterfall from start to finish, from creation to rendering (here you can see an early experiment).

I created a total of six waterfalls, each with a whitewater setup included as well as volumes for mist.

A piece of advice that I heard in one of Jeff Wagner’s workshops is the following: "Use height fields as colliders, they are extremely fast to calculate" and that was absolutely true. With this approach, I was able to create the major lake waterfall as seen in the image below. 

Post-production for this project was quite limited. The key areas of focus were eliminating some popping volumes, adding a glow/bloom effect, doing a sky replacement, and working with Z-depth correctly. All the credit here goes to my brother, Mina Kadry; without his help, it would have taken me longer to finish this piece.

In the image below, you can see a screengrab of my Nuke script. 

Conclusion

This project took me around one year and seven months to finish! The actual hours spent on it come to a whopping 1,320. Balancing full-time work (plus occasional overtime) and life responsibilities, I found I could work a couple of hours a day, maybe 5 to 7 hours a day on weekends, and some days not at all.

The main challenges I faced were getting USD to work properly between all the DCCs used in this project, getting the water simulation done, and finally, the render time. As mentioned above, I had to render everything at home, meaning my local workstation was occupied for extended periods during rendering, which delayed other aspects like comp.

My advice to any artist (Environment Artists, FX Artists, Character Artists, anyone, really) is to never give up on your project. It might take longer to finish, but in the end, you will gain a lot of knowledge from the mistakes you made, the things you did right, and all the tutorials you watched.

For Environment Artists specifically: start slow, and do not rush to the final product while you still need to block out your scene. Always have a temporary light setup in hand and a camera in your scene if you can. It will help you a lot in determining where you can save effort and where you have to spend your time.

Hope this article sheds some light on my approach to finishing this project. 

Fady Kadry, Head of Build

Interview conducted by Theodore Nikitin
