Mariano Merchante shared his process of recreating New York City Police Headquarters from initial modeling in Maya and Houdini to lighting setup and rendering in Redshift.
Introduction
Hello there! My name is Mariano Merchante. I’ve been fascinated with computer graphics since I was a teenager and have dedicated the last 15 years to learning its tools, techniques, and underlying concepts. I’m deeply interested in both sides of the coin, trying to learn as much as possible about both the technical and the artistic elements.
I studied software engineering specializing in image sciences in my home country Argentina, worked as a mobile game developer/technical artist for some time and then came to the US to get a computer graphics master’s degree. Since then, I’ve been focused on real-time graphics for different technologies/platforms for various clients.
New York City Police Headquarters
The backstory of the project is pretty straightforward actually! I was staying in New York for a work-related project, and while walking around Little Italy at golden hour I suddenly looked up and saw this beautiful dome being perfectly lit by the sun. In a way, I wasn’t expecting it because the adjacent buildings didn’t have anything notable, so the contrast and surprise made a huge impact on me. I immediately picked up my phone, took a picture with the best composition I could, and decided that someday I would model it.
Modeling the Building
This was my third building reconstruction, so I approached it in a similar way as before. I prefer blocking out the geometry, camera/composition, and lighting as soon as possible. These elements are tightly coupled: moving the camera affects the lighting and the geometry, so I try to stay as flexible as possible. It also lets me discard geometry I know I won't need. Something I really enjoy is playing with offscreen shadow casters to build the shape of the shadows and occlusions before I get into the details and actual modeling. Lining up the camera with the building the way it appears in the reference can be tricky, but fortunately, I had info on the place, the time, and some of the camera properties like the focal length.
From there, the next step is finishing the “rest of the owl”, focusing on primary, secondary, and tertiary shapes in that order, but in different sections of the building, bottom-up. This lets me iterate these parts in an isolated way so that I can prevent repeating mistakes and make the process faster. An example of this is the cornices, where I initially mocked up an “ok” model but didn’t really lean into it after finishing the dome’s cornices. Every new cornice I modeled brought new insight. As for details, a lot of work went into beveling at the right scale to get nice edges and adding cuts to the geometry to imply the shape of the building’s construction blocks.
Because of my approach, every time I jumped to a new section of the building I already had iterated some geometry that I could reuse. At that point, I evaluated if it was good enough or if it needed more work. Depending on that, I could backtrack those changes to the older models too, to keep consistency.
As for the pure repetitive elements, I heavily used a plugin I wrote for Maya a couple of years ago that instances objects along a curve. For this, I also had to build proper construction curves that matched the dome shape, which proved to be trickier than I expected! Other elements and sections were manually instanced, like the columns, vases, etc. As usual with Maya and weird instanced scene subtrees, I had a lot of crashes!
All these elements were instanced with instanceAlongCurve:
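For the curious, the core idea behind a tool like instanceAlongCurve can be sketched outside of Maya: sample the curve at evenly spaced parameters and derive a position plus a tangent-based orientation for each instance. The following is a hypothetical standalone sketch (the actual plugin operates on Maya NURBS curves through its API), with the curve given as a simple `t -> (x, y, z)` function:

```python
import math

def frames_along_curve(curve, count, eps=1e-4):
    """Sample `count` evenly spaced parameters on `curve` (a function
    t -> (x, y, z) for t in [0, 1]) and return (position, tangent) pairs.
    The tangent is estimated with a small finite difference."""
    frames = []
    for i in range(count):
        t = i / (count - 1) if count > 1 else 0.0
        t0, t1 = max(t - eps, 0.0), min(t + eps, 1.0)
        p = curve(t)
        a, b = curve(t0), curve(t1)
        # direction of travel along the curve, normalized
        d = tuple(y - x for x, y in zip(a, b))
        length = math.sqrt(sum(c * c for c in d)) or 1.0
        tangent = tuple(c / length for c in d)
        frames.append((p, tangent))
    return frames

# Example: a circular construction curve, like a ring of columns around the dome.
circle = lambda t: (math.cos(2 * math.pi * t), 0.0, math.sin(2 * math.pi * t))
frames = frames_along_curve(circle, 8)
```

For a closed curve like the dome's rings, you would divide by `count` rather than `count - 1` so the first and last instances don't overlap at the seam.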
Challenging Pieces of Architecture
Apart from the challenge of finding the proper interlocking of all the shapes, the Corinthian columns were the most problematic! I initially tried to model them in ZBrush to emulate a real sculpture, but it proved too time-consuming for what is a small detail overall, so I ended up modeling them in Maya. A similar thing happened with the statues: I started manually sculpting the cloth, but it took too long to get somewhere I was happy with, so I ended up simulating it in Houdini with Vellum. I reused the base female model from a previous sculpting project. Finally, there were some issues when modeling the dome's lower section, which required tricky lattice and bend deformations.
Materials
I’ve always been a fan of Neil Blevins’s work on materials, particularly when it comes to rendering thousands of similar elements. There are basically two or three procedural materials that get heavily reused (mostly stone and glass), so I put a lot of work and patience into making these materials as flexible as possible to reduce repetition.
In a way, my smallest “shader block” is almost always a 3D fractal noise and a remap function. Coupled with different scene-aware signals (random colors, vertex paint with each channel driving a different effect, curvature, ambient occlusion, normal direction, round corners, etc.), it can drive many different phenomena like dirt, smoke/smog darkening, water leakage, and weathering/chipping. In particular, I found that using curvature nodes with a noise-based radius can generate very nice weathering in Redshift, and layering a couple of these elements produces nice results.
As an example, the dome has a vertex color set that modulates the noise. The noise simulates material corrosion/deposition, but with specific patterns that imply there are different construction blocks when in reality it’s just one mesh. Another fun trick is adding subtle noise deformation to the windows, which are never perfectly built, and making them diffuse toward the edges based on curvature/AO. Similarly, the stone’s base bump has a very subtle Voronoi noise that breaks up the monotony and gets added to the other stronger bump noise signals.
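As a concrete sketch of how those per-channel signals might combine, here is a hypothetical illustration (the effect names and weights are made up for the example, not the exact Redshift graph):

```python
def layer_masks(vertex_color, noise_value, curvature, ao):
    """Combine scene-aware signals into one weathering mask, with each
    vertex-color channel gating a different effect.
    All inputs and the result are floats in [0, 1]."""
    r, g, b = vertex_color
    dirt     = r * noise_value       # red channel gates noise-driven dirt
    chipping = g * curvature         # green gates edge chipping on curved areas
    leakage  = b * (1.0 - ao)        # blue gates leakage where occlusion is high
    return min(dirt + chipping + leakage, 1.0)
```

Painting the channels independently means one mesh with one material can show very different wear from section to section.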
A lot of this is very similar to how one would do it in Substance Painter or a similar tool, but having to create nice UVs and baking textures for around 21 million polys would take too much iteration time (and memory!).
Lighting
I thought the lighting was done after the blocking phase, but once I finished the modeling and materials, I realized there was very little contrast and too much saturation, and the eye would get lost in this giant orange blob of a building. I played a lot with the offscreen blockers, giving them some tint, and added lights to accentuate the volume of the dome. I also added lights hinting at something beyond this single building at street level: either car or window caustics. This is something I really like seeing outside, weird light bounces that make no sense until you look into them, and I tried to replicate it. There's also a very subtle depth of field with chromatic aberration in the bokeh, but I made sure the building doesn't look like a toy.
Post-Process
After all that, I went into Photoshop for general post-processing/color correction, where my focus was on bringing up the blues and generating a more stable look. As for the background, I purposefully looked for clouds from HDRI Haven that would be perpendicular to the building’s axis (and aligned with the viewer's eye movement). This, I think, adds to the dreamy look and improves the composition. I also rendered a volumetric scattering pass to add a bit of haze and scale to the scene.
Rendering
Honestly, Redshift is just amazing, as it reduces iteration times drastically. Because my workflow is not really texture-based, I rely on the renderer's efficient evaluation of procedural nodes, which it handles without issue. It also has a lot of very useful shading utilities that simplify the construction of complex signals for the materials. I'm amazed at how good and fast it has become in the last few years.
Afterword
Thanks for reading, and I hope this will inspire you to look up while walking on the street!
Mariano Merchante, Engineer & 3D Artist
Interview conducted by Kirill Tokarev