
Working With Camera in Environments

Fredi Walker did a breakdown of his great urban environment, which mixes intricate details with beautiful camera work.

 

Introduction 

I recently graduated from the University of Portsmouth in the UK, which was fantastic for taking my skills from zero to where they are now. Currently, I'm freelancing and working on personal projects on the side to bulk up my showreel. In most of my work, I focus on lighting and on capturing a certain mood or feel for an environment.

About the scene

This scene started a couple of months ago when, while taking my daily dose of concept art, I stumbled upon a fantastic image that inspired me. Unfortunately, I can't remember where that piece of concept art came from. I did the basic blockout in a frenzy, in one day, plopping down all the basic shapes, glancing back and forth between the concept and my 3ds Max viewport, drinking a lot of coffee to push through as much as possible in one sitting. Finally, after hours of modelling, I was finished at something like 4 am. With severe eye strain from sitting 2 cm from my screen, chemically fueled wakefulness, one eye open and one asleep, I looked at my scene. It was a Frankenstein-esque disaster. I closed 3ds Max and went to sleep.

If I still had that image, I'd show it to you and explain how not everything translates well into 3D. Alas, I do not have the image, but I'm going to tell you anyway: not everything translates well into 3D.

Anyhow, I didn't make that scene, but I did lay the groundwork for something else. I set about re-modelling everything and playing with forms and composition. I still wanted many elements of the initial image, a dense urban setting, but I wasn't sure exactly how to lay it all out. I had to do a bit of concepting myself, and there were two things I kept in mind while doing it.

Blocking, or layout, or whatever you might call it, is perhaps (in my opinion) the most important aspect of 3DCG, maybe even of any artwork. Why? Because you're playing with the biggest spaces on your (virtual) canvas; you're filling up 90% of the scene with space right from the get-go (be it positive or negative space), and if it looks bad at this stage, nothing else will fix it. Pretty textures or a nice shader will not fix bad composition and will only act as window dressing. So that is rule number one I needed to get right. Rule number two is getting correct values. I learned about values from various YouTube art tutorials, namely those by Sinix and Sycra Yasin (both amazing art instructors), when I was really into digital painting a couple of years prior. Value is to lighting what composition is to modelling. So what is value?

Value is the darkness or lightness of a pixel, so when I say values, I'm referring to the balance between light and dark in an image. For instance, if you take an image from any prominent photographer and turn it greyscale (if it isn't already), you will see very clearly, without the distraction of colour, how having the right balance of dark and light is key to a captivating image. What you might see in a good photo are areas of low and high values separated like continents; if you squint, or take a step back, you can still tell what the image is, and it doesn't lose itself in a sea of noise. To illustrate it a bit better: placing very contrasting values right next to each other appears distracting and makes the image look noisy. So composition and value are the two things I think about before anything else, all the time. They are not the only rules, there are many more, but they are the ones I keep in mind right at the start.
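The greyscale check above is easy to reproduce outside of an image editor. Here is a minimal numpy sketch (the Rec. 709 luma weights are a common choice; any perceptual weighting would illustrate the same point):

```python
import numpy as np

def to_greyscale(rgb):
    """Reduce an RGB image (H, W, 3, floats in 0..1) to values only,
    using the Rec. 709 luma weights."""
    weights = np.array([0.2126, 0.7152, 0.0722])
    return rgb @ weights

# Tiny hypothetical test image: white, black, mid-grey, pure red
img = np.array([[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]],
                [[0.5, 0.5, 0.5], [1.0, 0.0, 0.0]]])
grey = to_greyscale(img)
```

Squinting at the `grey` array (or a real photo processed the same way) shows only the distribution of light and dark, which is exactly what the value check is about.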

When I finished blocking out the apartments, I wanted to think in more detail about where exactly this place might be, and I was inspired by Kowloon Walled City, a place in Hong Kong that was home to an entire underclass of people: crooks, opioid addicts and the like. Before it was demolished in 1994, it housed 50,000 inhabitants in 0.02 square kilometres. I wanted to recreate those dense, dirty urban dwellings, something you might glimpse while passing by on a train and wonder, "what's there?". Because the dwellings are so dense, it occurred to me that you could probably see the lives of many of these people through one snapshot, looking through their windows; and while no one might be present, you could see how the people lived through their personal objects. As a final thought, I wondered what it would be like to live on the border of such a city, one household in an opulent apartment while its neighbour is the opposite. This is what I had in mind while building this image. I'm not fully satisfied with how the message is conveyed, but I think it's mostly there.

Modeling

I used 3ds Max for modelling, because it feels much more intuitive and user-friendly when you care only about modelling and want to focus on that, and Maya for shading, rendering and technical things like cloth simulation, because that seems to me to be where its strong suit lies. As I already mentioned, I began by blocking the scene out and making sure everything looked good in front of the camera. Putting down a camera and finalising a focal length and aspect ratio are the first things I do, so I can position everything perfectly for the shot. This means a lot of my shots break down if you start moving the camera, but it's obviously much faster to model only what you need.

Nothing was a challenge to model per se; the challenge was the quantity. Once the blockout was done, I spent a good three weeks modelling objects on and off. I tried to prioritise the quality of models and textures based on distance and focal points. However, there were points when I needed more objects because certain areas seemed too vacant, or some objects seemed better rearranged, so modelling was a continuous process right from the start to the end.

In terms of construction, everything was made using standard poly/box modelling techniques. One area where I tried to improve a lot is in creating realistic chamfers on edges. All objects need chamfered edges, because nothing in the real world sits at an infinitely precise angle; that is a given, but it's not enough to chamfer, you need to do it accurately. So what I tried to improve upon was getting a precise chamfer based on the type of material and construction of the object, whereas in the past I would not pay much attention to that. Of course, to save time you can forego this if the object is far enough from view.

Another technique I'm working on improving is deforming objects very slightly, as it helps remove the static CG look that can be quite common. A slightly bent pipe or a naturally misshapen fence makes the scene look much more interesting and helps remedy staleness. I feel like I could have gone much further with this, but that's for another time.

One interesting thing to mention is the making of the street signs. For the first time ever, I was able to make use of 3ds Max's text tool, which simply creates a spline out of any text you type in and which, with some extrusion, made convincing neon tubes. Very cool.

The main problem with this sort of setup is getting a good amount of illumination into internal areas without over- or under-exposing; that took some tweaking of light intensities to achieve. It was helpful to render out lighting passes and adjust the various lighting contributions until the balance was just right. Another issue was laying out assets so that everything looked natural but also worked well with the outdoor/indoor composition. For that, I relied on a lot of tweaking from the camera's perspective to make everything fit exactly where it needed to be; I honestly must have spent an hour adjusting the interior elements millimetre by millimetre to fit the framing of the window.

Working on the camera

All the cool effects you see were made in Foundry's Nuke; I try to delegate tasks to compositing to speed up rendering. Working in comp can also give you more control than 3D and allows you to 'fake' optical effects which would not occur within your scene through raytracing, such as the train reflection on the glass when there's no train in the scene. Going into comp, I used Arnold to render out diffuse, specular, emission, transmission and depth passes so I could tweak the scene later without having to worry about re-rendering (though I had to anyway).

Making the raindrops on the glass was probably the most enjoyable part of the project because it's something I had never tried before. I did not want to raytrace the entire scene through a glass pane in front of the camera, as rendering and look-deving glass, rain, dirt, smudges and so on would have taken far too long on my machine. So I decided to take all of that into comp. To make the raindrops, I found Cornelius Dämmrich's 8k, 32-bit raindrop displacement map and used it to displace a plane in front of the scene's camera. After rendering the 3D raindrops, I was able to get a normal pass from them, which stores the surface normal vectors.

Nuke has a flashy node called iDistort, which allows you to distort an image based on a UV vector input. It can be used to make river flow, waves, heat distortion and even fake refraction, which is exactly what I used it for. I didn't expect it to work so well, but it did. Next, as this is 'fake' refraction, I needed to build up the lighting a bit. I added a specular component using a ReLight node, which takes a normal pass, a 3D camera (imported from the scene), a shader (just specular in this case) and two lights, one blue-tinted, one slightly golden.
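The core idea behind iDistort is just a per-pixel UV lookup: each output pixel is sampled from a position shifted by a vector map (here, the raindrop normals). A minimal numpy sketch of that idea, not Nuke's actual implementation, might look like this (nearest-neighbour sampling for brevity; Nuke filters properly):

```python
import numpy as np

def idistort(image, uv_offset, scale=1.0):
    """Distort an image by sampling each pixel from a UV-shifted
    location, in the spirit of Nuke's iDistort node.
    image:     (H, W) or (H, W, C) array
    uv_offset: (H, W, 2) array of per-pixel x/y offsets in pixels
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift the sampling coordinates by the vector map, clamped to the frame
    sx = np.clip((xs + uv_offset[..., 0] * scale).round().astype(int), 0, w - 1)
    sy = np.clip((ys + uv_offset[..., 1] * scale).round().astype(int), 0, h - 1)
    return image[sy, sx]
```

Feeding a raindrop normal pass in as `uv_offset` bends the background image behind each drop, which reads as refraction even though no rays were traced.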

Blended with a screen merge on top of the main comp, this gave the raindrops a nice sheen, as if street or neon lights were hitting them on the rim. The rain was getting in the way in some places, so I also merged a blue grunge map on top of my normal pass to clear it off in those areas. To add a little continuity, I also added some rain effects outside, around the volume lights, as we might see in real life.

To get more control over the directionality and streaking/motion, I used a combination of noise and a directional blur. This was done by simply generating some noise, directionally blurring it and multiplying it over my volume pass. As a note, it's important to consider how long your rain streaks should be. If you take a camera and shoot photos at night, you will most likely use a very long exposure and/or a wide aperture to expose as much light as possible, and a long exposure will cause prominent motion blur on your raindrops (as seen above); with this in mind, I kept my rain streaks fairly long. I used only one layer of rain, but to improve on this you might consider multiple planes of rain based on distance, or, if your scene is not static, a particle solution might be simpler.
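The noise-plus-directional-blur step can be sketched in a few lines of numpy. This is only a stand-in for Nuke's Noise and DirectionalBlur nodes (the streak here is a simple average of shifted copies, and the resolution and streak length are made-up values):

```python
import numpy as np

def directional_blur(img, length=9, axis=0):
    """Streak an image along one axis by averaging `length`
    shifted copies -- a crude directional/motion blur."""
    out = np.zeros_like(img, dtype=float)
    for i in range(length):
        out += np.roll(img, i, axis=axis)
    return out / length

rng = np.random.default_rng(0)
noise = rng.random((64, 64))
# Long vertical streaks read as rain in a long night exposure
streaks = directional_blur(noise, length=12, axis=0)
# Multiply over a (hypothetical) volume-light pass, as in the comp
volume_pass = np.full((64, 64), 0.5)
rain = volume_pass * streaks
```

Lengthening `length` mimics a longer exposure; rendering several such layers at different scales would approximate the distance-based planes mentioned above.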

The glows you can see are generated by blurring the emissive pass twice and then screen-blending it on top of the main comp. The first blur is weak and tight, for the inner glow of the lights, resulting from the overexposure of the incandescent element of the bulb; it has a high colour temperature of around 3100-4500K. The second blur, which is larger and more diffuse, is for the outer glow of the bulb, with a lower colour temperature, appearing golden/red. Of course, you can play with the strength and colour of these, as particularly weak light bulbs give a more golden glow. I used a Glow node for this, as it functions as a blur and grade in one, making life a bit easier; but when using it, be sure to check "effect only", otherwise it applies the effect to the entire image. To complete the effect, I threw in some lens flares.
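The two-blur glow reduces to a small amount of arithmetic: blur the emissive pass at two radii, then screen-merge both over the comp. A rough numpy sketch, assuming a box blur in place of Nuke's Gaussian and made-up radii and gains:

```python
import numpy as np

def box_blur(img, radius):
    """Cheap separable box blur standing in for a proper blur node."""
    k = 2 * radius + 1
    for axis in (0, 1):
        acc = np.zeros_like(img, dtype=float)
        for s in range(-radius, radius + 1):
            acc += np.roll(img, s, axis=axis)
        img = acc / k
    return img

def screen(a, b):
    """Screen merge: brightens without clipping past 1.0."""
    return 1.0 - (1.0 - a) * (1.0 - b)

emissive = np.zeros((32, 32))
emissive[16, 16] = 1.0                        # a single hypothetical bulb
inner = box_blur(emissive, radius=1)          # tight, hot inner glow
outer = box_blur(emissive, radius=5) * 0.5    # wide, dimmer outer glow
comp = np.zeros((32, 32))                     # stand-in for the main comp
comp = screen(screen(comp, inner), outer)
```

In the real comp each glow would also be graded towards its own colour temperature (whitish for the inner, golden/red for the outer) before the merge.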

One of the most important effects for making your image look realistic is depth of field, where the lens focuses on a region in space, defocusing everything else. Note, however, that defocus is NOT simply a blur; I used to make this mistake, but the difference between a Gaussian blur and a camera defocus is very obvious once you compare them. Nuke has a ZDefocus node into which you can plug a depth pass; it sets the focal distance for your image and lets you adjust your depth of field, focal point, bokeh type and much more. It's very nice indeed. Scale is very important for your depth pass, as it records your scene's distances as floating-point values, which can go from 0 to infinity, so if it's out of scale, it can screw up your defocus a little, and by a little I mean a lot. If you're like me, incorrect scene scale can become a bad habit, and if it does, you will have to fix your depth pass by normalizing it with a Grade node so it's much more manageable. After some grading of the depth pass, I was able to fix my depth of field. Finally, I applied a vignette and some grain as a finishing touch; you could additionally apply lens distortion for extra camera realism, but I did not see that as fully necessary for this image.
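The depth-pass fix is just a linear remap into a sane range. A minimal sketch of what the Grade node is doing here, with hypothetical near/far distances and badly-scaled scene units:

```python
import numpy as np

def normalize_depth(depth, near=None, far=None):
    """Remap a raw depth pass (arbitrary scene units, possibly huge)
    into 0..1, as you would with a Grade node before ZDefocus."""
    near = depth.min() if near is None else near
    far = depth.max() if far is None else far
    return np.clip((depth - near) / (far - near), 0.0, 1.0)

# Hypothetical out-of-scale depth values straight from the renderer
depth = np.array([[120.0, 4500.0],
                  [90000.0, 250.0]])
z = normalize_depth(depth)
```

With the pass in 0..1, picking a focal point and depth-of-field range in the defocus node becomes predictable instead of guesswork.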

Materials & Textures

When making materials, I try to work as procedurally as possible. A lot of my materials use either blended tiled textures or layered shaders with noise or masks to get the desired effect.

With the window panels, for instance, I'm using aiCurvature as a mask between the paint and the underlying wood in a layered shader. Most of my materials have the standard diffuse, roughness and bump/normal maps, and sometimes metallic where appropriate. I usually derive these maps from an initial albedo texture by plugging it into two or three remapValue/remapHsv nodes, which offer hue, saturation and value colour correction; this is pretty much a cornerstone of my material workflow, as being able to edit textures without going into Photoshop is very handy and keeps me versatile. Once the basic surface details are covered, I think about what extra textures I could incorporate; sometimes adding a little noise to roughness or bump can make a material much more interesting, though subtlety is important here. When a model needs to be unique or is getting extra attention, I usually go into Substance Painter to texture it by hand. Whenever possible, though, I try to reuse materials. A nice trick to avoid repetition is the aiColorJitter node, which can give you random HSV offsets per face, object or UV; you can use it for things such as random roughness or variations in texture tiling per object.
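The trick behind per-object jitter is seeding a random offset from a stable object identifier, so the same object always gets the same variation across frames. A small Python sketch of that idea (not Arnold's actual implementation; the base colour and jitter amounts are made up):

```python
import colorsys
import random

def jitter_hsv(rgb, object_id, hue_amt=0.05, val_amt=0.2):
    """Per-object HSV jitter in the spirit of Arnold's aiColorJitter:
    the same object id always yields the same random offset."""
    rng = random.Random(object_id)  # seeded by id, so it is repeatable
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + rng.uniform(-hue_amt, hue_amt)) % 1.0
    v = min(max(v + rng.uniform(-val_amt, val_amt), 0.0), 1.0)
    return colorsys.hsv_to_rgb(h, s, v)

base = (0.6, 0.45, 0.3)  # hypothetical shared brick albedo
variants = [jitter_hsv(base, obj_id) for obj_id in range(4)]
```

The same seeding idea applies equally to jittering roughness values or texture-tiling offsets per object.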

Lighting 

I have seven lights in total in the scene. I started by putting down a skydome light with an HDRI, just an HDR of a forest landscape I found; all I wanted was some tonally varied lighting from a sky, and I turned the intensity down to 0.02. The HDRI gave the scene a nice blue ambient light, so the next step was to highlight areas of interest with some direct lighting.

Omni lights were used to light the interiors, and then Arnold area lights to 'sculpt' the scene; I placed them around to bring out the forms of the buildings and objects. After rendering, I took my lighting passes and played with them a lot to get the right colour temperature and exposure. You might notice that the neon lights are not in the lighting breakdown; that's because they are just emissive materials with a glow/blur applied at the compositing stage. Generally, it was all balanced together in Nuke. As it's a night scene, I had to increase the exposure of the lights a lot to make the scene really pop, and I'm quite happy with how it turned out.

Challenges 

I worked on the image on and off for about two and a half months; at the start, I left it for a bit because I had to think about where to take it and wanted to work on some smaller projects. The two hardest aspects of the image were getting a good composition and the sheer quantity of detailing. Whenever I thought I was finished, I could always see an area that looked too empty, so it was a constant, iterative process where more and more objects were added; even at the compositing stage, I went back and changed some signs around. As for composition, as I mentioned earlier, I think it's REALLY important, so I tried to get it right and it was always on my mind. I saw the image as made up of many smaller images, like frames within frames. I had to move objects around within the framing of the right-hand windows, and then the ones in the doorway, sometimes millimetre by millimetre. Anyhow, I think it came out alright in the end.

Fredi Walker, CG Artist.

Interview conducted by Kirill Tokarev.
