The team of RealtimeUK shared some details behind the production of an outstanding Game of Thrones: Winter is Coming trailer made for Yoozoo Games.
About RealtimeUK
We’re a VFX and animation studio with two locations in the UK. We specialize in cinematic trailers for games such as SMITE, War Thunder, and Jurassic World, but we also have a world-leading automotive department, and we’ve recently had a big break into TV, which should be on screens very soon. Basically, we love creating visually stunning content for all audiences.
Game of Thrones Project
We are all massive fans of Game of Thrones, so this project was a dream to work on. That came with huge pressure, as we know how well known and loved the characters are all over the world – so we had to nail the quality and create a truly authentic GoT experience for the viewer.
Our task was to introduce viewers to the official HBO-licensed game and create a trailer that was authentic to the TV show in every way. The message of the trailer is the impending threat of ‘winter is coming’ and how that links the key houses from the show as they prepare for what is to come. The trailer is split into two sections: one literal, showing the characters in real locations from the show, and the other more abstract, showing the house sigils being overcome by the frost of winter. We decided internally that the raven would be the thread weaving through the two sections, illustrating the journey of the message of war. The trailer was designed to be a teaser for the epic battles that would follow!
This was the first time we had been tasked with creating CG versions of well-known characters, and there is nothing like jumping in at the deep end! To top off the complexity of the task, we knew from the start that we would not have access to the actors or any scan data. So the search for reference began: the client provided some good on-set photography, which was especially helpful for the clothing, and apart from that the whole team scoured the web for any material we could find. The main challenge that arises from using multiple sources of reference is variation in camera lenses and lighting. It’s amazing how these two factors completely change the perception of form in the human face. We would always find the camera lens data in any reference we used and do test renders with CG cameras to match the references.
However, we found that the characters were still not instantly recognizable. After multiple iterations, the amazing character modeling and lookdev team had created these great assets, but again we found that we lost the likeness in the shots from the animatic. We then realized we would have to go back and adjust all the lenses and camera positions to mimic the cinematography used in the TV show. That was the only way we could get the instant read on the characters we had been looking for. This was the tipping point where the whole thing came to life, and we knew all the amazing lookdev would find its way into the final trailer.
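To give a rough idea of the lens side of that matching (this is generic camera math, not our pipeline code), the field of view of the CG camera can be derived from the focal length and sensor width reported in a reference photo’s metadata:

```python
import math

def horizontal_fov(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal field of view (degrees) for a pinhole camera model.

    focal_length_mm typically comes from the reference photo's EXIF data;
    sensor_width_mm depends on the camera body (36.0 mm = full frame).
    """
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Example: a portrait reference shot on a full-frame body with an 85 mm lens.
print(round(horizontal_fov(85.0), 2))  # ~23.9 degrees for the CG camera to match
```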
Software Used
The main pieces of software we use for character work are 3ds Max, ZBrush, Mari, and Substance Painter, and the groom is done using Ornatrix. For the simulations and other FX, we use Houdini.
Rookery Interior
One of the challenges was that we had little reference as to how the rookery should look. So we tried to match other Winterfell sets from the actual show, making sure it’s all cohesive. Our modeling and texturing team really went to town on creating several props and scattering them around the scene. The purpose was to give a feeling of an old, dusty, cluttered interior, where a Maester would spend his time sorting out correspondence.
Prop Production
Props are textured in Substance Painter, as it allows us to get a result out sooner and iterate faster. Characters are done using a combination of Substance Painter and Mari: Painter for the elements that don’t span multiple UDIMs or that have real-life seams, so they can be separated naturally, and Mari for large continuous meshes like skin.
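For anyone unfamiliar with the convention, UDIMs are just numbered UV tiles; the snippet below (generic, not studio code) shows the standard mapping from a UV coordinate to its UDIM tile number.

```python
def udim_tile(u: float, v: float) -> int:
    """Return the UDIM tile number containing a UV coordinate.

    Tiles run 1001-1010 across U, then step by 10 per row in V,
    so (0-1, 0-1) is 1001, (1-2, 0-1) is 1002, (0-1, 1-2) is 1011, etc.
    """
    return 1001 + int(u) + 10 * int(v)

assert udim_tile(0.5, 0.5) == 1001
assert udim_tile(1.5, 0.5) == 1002
assert udim_tile(0.5, 1.5) == 1011
```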
After gathering all the references, we were able to break down the scenes from the animatic into smaller pieces like pillars, chandeliers, chairs, tables, walls, etc., and then assign them to artists. Usually, one artist will make one asset from tip to toe, unlike many bigger companies where jobs are divided between artists by phase: modeling, texturing, or shading. In most cases, modeling was done in 3ds Max and texturing in Substance Painter. Sometimes we use Mari for bigger surfaces where it’s necessary to easily paint across different UDIMs.
Tweaking the materials for the final render is always easier in Max than relying on the output of Substance alone. V-Ray’s interactive rendering was a great help here.
Certain assets needed special attention, like the Iron Throne or the candles, where a traditional workflow would have been too tedious or simply wouldn’t have given a good enough result in the time available. For example, instead of traditional modeling or sculpting for the candles, we used particles to get a realistic base, which was then polished into a nice set of candles.
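As a standalone illustration of that idea (this is not our Houdini setup; the shapes, counts, and values are invented), seeding drip-like particles around a cylinder gives exactly the kind of melted-wax base you would then mesh and polish:

```python
import math
import random

def candle_drip_points(radius=1.0, height=6.0, n_drips=24, steps=40, seed=7):
    """Seed particle positions that read as melted wax running down a cylinder.

    Each drip starts at a random angle near the rim and slides downward a random
    distance, bulging slightly outward along the way. The resulting point cloud
    is the kind of base you would mesh (metaballs/VDB) and then polish by hand.
    """
    random.seed(seed)
    points = []
    for _ in range(n_drips):
        angle = random.uniform(0.0, 2.0 * math.pi)
        length = random.uniform(0.2, 0.9) * height
        for i in range(steps):
            t = i / (steps - 1)
            y = height - t * length
            bulge = 0.05 * math.sin(t * math.pi)  # thicker in the middle of the drip
            r = radius + 0.02 + bulge
            points.append((r * math.cos(angle), y, r * math.sin(angle)))
    return points

pts = candle_drip_points()
print(len(pts), pts[0])
```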
Character Likeness
The likenesses were all based on studying photography, as there were no scans of the actors. We would constantly go back and forth comparing the sculpts until they were at a level we were happy with. We would find photos with the camera data embedded and match that data in our scene to help us judge where things needed tweaking. Once we were in a good place with the proportions, we would then go into Mari and do all of the texture work using TexturingXYZ displacement and albedo maps.
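Those multichannel maps typically store different frequencies of surface detail in the R, G, and B channels; a common way to use them is to blend the channels with per-frequency gains on top of the sculpted displacement. The sketch below uses placeholder arrays and made-up weights purely to show the idea, not our actual shading setup.

```python
import numpy as np

def blend_displacement(base, xyz_map, weights=(0.3, 0.5, 0.2), midpoint=0.5):
    """Combine a TexturingXYZ-style multichannel displacement map with a base sculpt.

    base:    (H, W) displacement from the sculpt.
    xyz_map: (H, W, 3) map whose channels hold different detail frequencies.
    weights: per-channel gains - illustrative values, tuned per asset in practice.
    """
    detail = sum(w * (xyz_map[..., i] - midpoint) for i, w in enumerate(weights))
    return base + detail

# Placeholder data standing in for EXR/TIFF maps loaded from disk.
base = np.zeros((4, 4), dtype=np.float32)
xyz = np.random.default_rng(0).random((4, 4, 3), dtype=np.float32)
print(blend_displacement(base, xyz).shape)  # (4, 4)
```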
The main challenge was creating the actual likeness. With no scans to go off, it was a constant process of making changes, comparing the renders to photography, and then making further changes to the sculpt, the camera, and the lighting. It was a constant guessing game of what needed to change in order to make the character more believable.
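On the camera side of that loop, a back-of-the-envelope relation helps sanity-check where the reference photographer was standing: for a pinhole camera, distance is roughly subject size times focal length divided by how much of the sensor the subject fills. The numbers below are illustrative, not taken from the production.

```python
def camera_distance_m(subject_height_m: float, focal_length_mm: float,
                      sensor_height_mm: float = 24.0, frame_fill: float = 0.8) -> float:
    """Approximate camera-to-subject distance in metres for a pinhole camera.

    frame_fill is the fraction of the frame height the subject occupies;
    24.0 mm is the height of a full-frame sensor. Defaults are illustrative.
    """
    return (subject_height_m * focal_length_mm) / (frame_fill * sensor_height_mm)

# A ~25 cm head filling 60% of the frame on an 85 mm lens sits roughly 1.5 m away.
print(round(camera_distance_m(0.25, 85.0, frame_fill=0.6), 2))
```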
Rendering & Comping
The show was rendered in V-Ray, with the hair using the VRayOrnatrixMod to keep render times on the hair strands efficient. It allowed us to achieve a more natural result out of the box.
To allow more than one artist to work on a scene, we referenced all the elements of the actual set into one scene, except the walls and grounds. This was a combination of XRef objects/scenes and V-Ray proxies. This way it was easier to handle bigger, more complex scenes while maintaining the ability to update them with the latest changes. Plus, the hardware footprint of the scene was much friendlier.
V-Ray’s support for Cryptomatte was critical in allowing us to control and fine-tune every element of the shots in post-production. The progressive renderer was useful for achieving quick local renders before sending to the farm. We also used IPR to help set up lights and make sure the shadows were cast exactly where we wanted them.
Photorealism is always the goal in projects like this Game of Thrones trailer, and to reach that takes time. Nuke gives us a lot of power in how we manage shots. It allows us to dive deep into fine-tuning minute aspects of the renders without getting lost in the comp.
We use multi-channel EXRs for each pass. For example, there will be an environment EXR sequence and a character EXR sequence. Each contains multiple render elements within that we then access in Nuke for the compositing process.
Chris Scubli (Lead Artist): I like to keep my comps in the RGB channel, so from each read node, I will shuffle any render elements I need into RGB and then merge those into my B pipe as necessary to adjust things like reflection, light selects, or to add fog based on world/camera data.
I always have a B pipe, and everything gets fed into that, with A pipes coming in from the left, and masks for merges or various nodes coming in from the right. I try to keep everything tidy and labeled. I find this gives me instant feedback on what’s going on at any place within the comp. I can then focus more on iterating the shots instead of navigating an ever-growing comp.
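To make that structure concrete, here is a minimal sketch of the Read → Shuffle → Merge setup described above. It only runs inside Nuke’s Python interpreter, and the file paths, layer name, and grade values are placeholders rather than anything from the actual comp.

```python
import nuke

# Placeholder file paths - the real comp reads multi-channel EXR sequences per pass.
env_read = nuke.nodes.Read(file="renders/environment/env.####.exr")
char_read = nuke.nodes.Read(file="renders/character/char.####.exr")

# Shuffle one render element (layer name is illustrative) into RGB
# so it can be adjusted on its own before going back into the B pipe.
refl = nuke.nodes.Shuffle(label="reflection -> rgb")
refl["in"].setValue("reflection")
refl.setInput(0, char_read)

# Tweak the isolated element, e.g. brighten the reflections slightly.
refl_grade = nuke.nodes.Grade(white=1.2)
refl_grade.setInput(0, refl)

# Merge the adjusted element back into the B pipe: input 0 is B, input 1 is the A pipe.
merge = nuke.nodes.Merge2(operation="plus", label="add graded reflection")
merge.setInput(0, env_read)
merge.setInput(1, refl_grade)
```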