Alexander Agredo told us how he used a combination of modern and traditional workflows to make the La Piazzetta project, discussed the texturing workflow in detail, and explained why adding movement is important for a scene.
Introduction
Hello! My name is Alexander Agredo, I’m a Colombian 3D Environment Artist for games and a recent Think Tank Training Centre alumnus.
I first got into 3D art because of my cousin; when I was a kid, he was studying Multimedia Engineering, and I remember passing by his computer and seeing a 3D model of Link from Zelda. As a Nintendo fan, I was amazed, and that sparked a lot of curiosity about 3D art in me.
It wasn't until 2 years ago that I decided to dedicate myself to it completely, so I joined Think Tank Training Centre’s 3D asset creation program in hybrid mode. This is a magical place where I acquired most of my skills in 3D art, and to be honest, it is one of the best schools in the world!
I decided to specialize in environment art because I fell in love with the idea of making players feel immersed in artistic worlds created by me or a team. The magic of environments is that they give the player the experience of exploring and admiring their surroundings. Being able to tell a story just through the surroundings is something I love, and I do it in my day-to-day life.
La Piazzetta
I chose the Bell Terrace concept by Quentin Stipp because I fell in love with its vivid colors and peaceful vibe. My mentor told me we had to change the left part of the concept because it could get really complicated in terms of perspective and make it more logical in a 3D world.
I didn’t know what to do, but then I discovered this new tool in Photoshop called Firefly, which is a generative AI (of course, this does not replace a proper artist), but it gave me the ability to iterate with different prompts to change the concept to something that made more sense in the 3D space. After quite a few iterations, I got this result:
You can see it is not perfect, but it gave me the idea of making this cozy Italian town, I don’t know why, but my mind felt transported to Italy. It has this traditional architecture and comfortable aura, but at the same time, this newness and magical feeling, like the bells and the red beam.
I imagined myself exploring the streets and finding this cozy plaza in summer, so beautiful that you don’t want to leave for the rest of the day. That’s the feeling I knew I wanted to transmit with this art piece.
References
For references, I like using PureRef. This is one of the most important phases of your project because it is where you get all the inspiration and guidance to make your assets and environment the way you want. Don’t be afraid of spending a good amount of time gathering references and organizing them (for instance, in this project, I spent 4-5 days).
My approach is simple: I usually don’t like having more than 3-6 references for each asset. I know there are people who collect this huge collage of references, but what I’ve noticed is that having 8 or more images tends to distract me, and I lose focus on why I picked those pictures in the first place.
I prefer having 3 to 5 images but knowing exactly why I chose them and what I’m going to look for later; that helps a lot. There are some exceptions to this, depending on the importance or complexity of the asset, like the bike or the fountain, which were my “hero” props.
Something I also like to do is get references on the fidelity of the project (how I want it to look overall in terms of quality). I get references from other video games or portfolio works for this. For instance, good references for this project were Hitman's World of Tomorrow and Overwatch's El Dorado. I also like to gather references for the lighting and overall mood.
You may notice that I gather references from both stylized and realistic works; this is because I wanted to explore my art style more. I wanted to go for something realistic but still have some stylization to it. I wanted to be able to make more artistic decisions, like exaggerating some features of an asset to make it look a bit “magical” rather than doing it 1:1, while still maintaining the realistic look. I call this “magical realism,” and it is inspired by the Nobel Prize-winning writer from my country, Colombia, Gabriel Garcia Marquez.
I recommend taking screenshots of all the assets and gathering references for every screenshot. This is helpful because you won’t have to go back to the concept again and again, which is when you can lose focus on what you’re doing:
As a bonus, I also like to make these edits in Photoshop to better understand my scene:
- A black and white pass on the concept to understand the lighting value.
- A division in fore-, mid-, and background to understand the importance of each area.
- A modularity analysis (if applicable) to understand how I would divide the structures into modules so they make sense and work with each other.
- A composition analysis to figure out what the concept artist used and what he wanted the audience to pay attention to.
- A color wheel analysis to figure out the color scheme and use harmonic colors whenever I need them. You can use Adobe Color for this.
Composition & Blockout
First, I try to use primitives to match the main shapes of the scene (the biggest assets); after that, I try to match the camera angle in Maya so it looks like the concept.
Then you can start matching the shape of the main objects in the scene using primitives. Remember, you’re trying to get the silhouette of the assets as close as possible; that’s the only thing that matters, so don’t start polishing or waste time making the objects as one mesh. What I do is create a bunch of primitives and combine them so I can get the shape right; I also don’t bother with naming at this stage. After this first pass, this is what I got:
Once you have the main shapes and the camera angle, focus on populating your blockout as much as you can (medium to small assets). After that, start working on making your modular pieces (walls, windows, and doors). This is what I had after the second pass:
After this pass, I imported everything into Unreal Engine and set up quick blockout lighting; this helps you see how your assets interact with the light and whether you have the right shapes.
Once everything is imported and you have the light pass, you can continue your blockout in Unreal. Remember, we’re prioritizing speed over polish, which is why I downloaded some Megascans assets to block out the foliage:
Something I regret from this stage is that I didn’t import the assets with a specific material/color to better understand the composition and to cut time when assigning materials to each asset.
Cameras
Something that surprised me was that my mentor told me to start working on my camera shots at such an early stage; at that moment, I couldn’t understand why, but now I think it’s one of the best pieces of advice I’ve ever received.
This step is important because it helps you understand which areas or assets matter most to work on. It is much more efficient because you know that if certain areas won’t be shown or are far away, you don’t have to spend unnecessary time polishing them. It also helps you start thinking about the storytelling of your project.
To set up my shots, I used this helpful video from William Faucher. With previous projects, I fell into the trap of making my cameras capture at 60 FPS because that’s how games look, which is not bad, but if you want cinematic quality in Unreal, you need to simulate how movie cameras work, like capturing at 24 FPS and using DSLR-style camera settings.
Assets
I used a combination of modern and traditional workflows to tackle this project. My mentor taught me to think outside the box: we’re in an industry that is always changing and updating, everything is becoming more procedural, and optimization in games has been changing its focus. That’s why I decided to approach this project relying mainly on modern workflows while still showing that I know how to do the traditional ones.
I used Maya, Substance 3D Painter, Substance 3D Designer, Substance 3D Sampler, Unreal Engine, ZBrush, SpeedTree, Marvelous Designer, RizomUV, Photoshop, and Premiere.
Modularity is such a crucial workflow, but if you don’t know the real purpose of it, it can be a problem rather than a solution.
If you’re doing something modular, it is because you’re going to reuse the modules (meshes) to create new buildings, structures, etc. It is a way of saving time when you must create variations of those assets. For “La Piazzetta,” I knew I had to use it because, besides the arch, I needed to make various houses that looked similar but with some variations; they could share the same modules, such as walls or roofs, but still vary in the windows, doors, etc. I had the why.
Never underestimate the power of modularity (if you need it). In the final weeks, I got feedback that made me notice my project lacked depth, and one of the ways I resolved this was by creating an alley, which ended up being one of the most beautiful shots in the scene. This was possible thanks to modularity: all the buildings in the alley were quickly built from modules. By that stage, I had all the modules already modeled and textured, so it was just a matter of assembling the right pieces.
Before
After
After
The background buildings were also made in those last weeks, and it was fast because of modularity, which helped a lot with the depth of the scene:
Combining Traditional & Modern Workflows
These were the modern workflows I used:
1. Vertical Slice
This consists of taking a small, square section of the concept and finishing it as much as you can over 3-5 weeks, which means textured and with all the shaders/materials needed. This approach is useful because it helps you with two things:
- Iteration: with this approach, you will make several vertical slices (I’ll shorten them to VS) until you complete all the concept art. It is normal that your first VS won’t come out at the quality you want. But I got better and better with every VS, and after the third one, I reached my quality bar, and every VS from that point on was at the same quality. After finishing all the VS, I could easily revisit the first 2 so they matched the quality of the others.
- Versatility: something that happens when tackling big environments is that we tend to get bored after doing the same thing for a long period of time. Let’s say I don’t like making foliage, and now I have to spend 3 weeks doing that; I’m going to get bored and demotivated. This workflow keeps us a bit more engaged because we are focusing on a mini-environment within the big environment. In one VS, you could be creating tileables, modeling, texturing, and making foliage all at once.
Here’s a good tip: if you don’t know where to start your project or if you ever feel lost on what to do, no matter the stage of your project, always go from big to small. Also, when starting a VS, push things to the bare minimum to be faster. Avoid making a lot of new things (reuse your tileables, meshes, etc). Here’s the first VS I chose:
2. Nanite workflow
I will talk more about this later.
3. Material Layers (ML)
This is an underrated tool inside Unreal Engine; it is basically a version of Substance 3D Painter in the engine. ML allows you to blend and layer different textures in a single material using masks. This is very useful for making in-engine changes to any asset. In my opinion, it is more powerful, procedural, and efficient to work with than just using a master material with RGBA masks.
I created the RGBA masks inside Substance 3D Painter. For this, I created my own output channels and renamed them R, G, B, and A. Then, I created a fill layer as the background, with all the channels set to 0 and a black color. After this, you can create any fill layer and assign it the channel you want the mask to be in.
To create good masks, I would highly recommend using smart masks. There’s a good library inside Substance 3D Painter, but you can also explore other libraries, like the ones from Javad Rajabzade. I would also recommend editing these masks and making your own to avoid repetition and add your artistic style.
Finally, to export everything without failures, I used a custom export template in Substance 3D Painter, which you can see here:
What can be a bit tricky is setting up the Material Layer shader the way you need it. But after some iterations, I ended up with a material that:
- Has the option of using a high-poly baked normal map and a curvature map to project to the mesh;
- Controls a lot of parameters of the tileable materials;
- Uses RGBA masks for layering.
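To give a rough idea of what that layering boils down to, here is a minimal sketch in Python/NumPy. It only illustrates the math of blending layers with a packed RGBA mask; it is not Unreal’s actual Material Layers graph, and all the names are placeholders:

```python
# Conceptual sketch (NumPy) of how a packed RGBA mask drives four tileable
# layers over a base material. This mirrors the idea behind the Material
# Layers setup described above, not Unreal's actual node graph.
import numpy as np

def blend_layers(base, layers, rgba_mask):
    """base: HxWx3 albedo, layers: list of four HxWx3 albedos,
    rgba_mask: HxWx4 packed mask exported from Substance 3D Painter."""
    result = base.astype(np.float32)
    for i, layer in enumerate(layers):
        weight = rgba_mask[..., i:i + 1].astype(np.float32)   # R, G, B, then A
        result = result * (1.0 - weight) + layer * weight     # simple lerp per channel
    return result
```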
4. Bonus: AI as a tool
I don’t think there’s room for a full replacement of art by AI; that’s soulless. But that doesn’t mean that all AI tools are bad. AI, as a tool, can be beneficial for new ideas and workflows.
As I showed in the beginning, thanks to Photoshop's Firefly, I could make quick iterations of some areas of the concept art that needed new ideas, and since I’m not a 2D artist (yet), this was the perfect solution to at least have a vision of how my concept would look with the ideas I had in mind.
A real game changer came while I was doing this project, when I had to tackle the other buildings surrounding the fountain. This is what I had at that point:
This was hard because there is no clear vision of what goes around that area in the concept. I tried to use the modular pieces I had at that moment, but it was looking weird, empty, and boring.
This is where having the right mentor, or someone you trust to guide you, is so important. I showed this to my mentor, and he immediately told me the problem: I needed more interesting shadows on my buildings and more interesting shapes on the roof, not just a rectangle. I needed to rebuild this area with more interesting shapes that cast more shadows, with things like balconies, props, roof tiles, etc.
I searched for some references for this, but I also used Photoshop’s AI to quickly iterate between ideas. After many tries, I finally made something that looked interesting and that sparked this feeling of wanting to be there:
You can see the results are not perfect, that’s what AI does, but this was the perfect mock-up for me to test new ideas, like the store, the balcony, the flags, the sign, and the roof. This is how my project ended up looking after I applied those ideas:
Finally, I used AI to speed up my workflow when making the rock floor material, but more on that in my material breakdown. I hope you get the picture of my message.
Foliage
I used these foliage assets: Boxwood, English Ivy, Lemongrass, Thyme, and Dead Leaves. I also downloaded different breads for the bakery store from Megascans, along with a small wooden chair.
Using Megascans can be a big help as long as it doesn’t take center stage in your scene and you credit the work.
Even though I used Megascans to complement my scene, I also learned a lot about foliage creation and SpeedTree. There were 2 things that helped me to understand the workflow for creating good-looking foliage:
- The first one is this amazing tutorial from Dekogon, which explains how to use SpeedTree to create realistic trees.
- The second one was my foliage mentor, Sylvia Cheng. She explained the workflow not only for creating trees but also for other types of foliage, and she gave me amazing feedback and resources.
I searched for the scientific name of the plant I wanted to make; that way, you get much better references:
I created an atlas for the tree branches using SpeedTree nodes, then cut out some leaves and flowers from a texture downloaded from Megascans, made 3 variations, and exported it.
Then, I created a mesh from this atlas using SpeedTree's “edit mesh” tool, and I added some anchor points so the program knew where to put the leaves.
Lastly, I finished the base branches in SpeedTree and added the meshes as leaf nodes, adding some flowers on top.
A similar approach was used for the plant pots. It was a challenge to get every plant the way I wanted using only SpeedTree nodes. I highly recommend checking out the SpeedTree YouTube channel; it has amazing tutorials on how to make specific plants.
I made the flowers of the plant pots and the trees in a way that I could change their color and translucency color in the engine. This way, I had more control over them.
Nanite Workflow
One of the biggest misconceptions I’ve seen is that people think that just by clicking “enable Nanite” when importing an asset, you’re already doing the Nanite workflow; in reality, you’re just applying Nanite to a high-density mesh. The Nanite workflow still uses optimization methods without losing quality.
The Nanite workflow is like the high-to-low poly workflow: you start by creating a base mesh, then import it to ZBrush, create a high and low poly version (I used decimation for this), and then do UVs for the low poly.
The advantage of this is that your low poly will stay much closer to the high poly because it still holds a good number of polys. So cavities, convex shapes, and sculpted detail will react much better to light than with the traditional approach. Here’s an example with one of the wooden windows:
Making UVs for decimated meshes can be very tricky. I made UVs for the base mesh and then used the “Transfer Attributes” option in Maya to transfer the UVs in world space. If this didn’t work, I would use RizomUV to make the UVs manually.
After this, I softened all the normals of my low poly (because when baking, you need the LP to be shaded like a “sphere” so the projection can be calculated better; if you have hardened edges without a UV seam, this can cause artifacts) and exported the LP and HP to Substance 3D Painter.
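As a rough illustration of those two Maya steps (transferring the UVs from the base mesh and softening the normals), here is a minimal Maya Python sketch; the mesh names are placeholders, and the same thing can be done through Maya’s UI, which is what I describe above:

```python
# Minimal Maya Python sketch of the two steps above; "baseMesh" and
# "decimated_LP" are placeholder names for your own meshes.
import maya.cmds as cmds

# Transfer UVs from the base mesh to the decimated low poly,
# sampling in world space so the shells stay aligned.
cmds.transferAttributes('baseMesh', 'decimated_LP',
                        transferUVs=2,    # transfer all UV sets
                        sampleSpace=0)    # 0 = world space

# Soften all normals on the low poly before exporting it for the bake.
cmds.polySoftEdge('decimated_LP', angle=180, constructionHistory=False)
```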
After baking, you should have an HP normal map baked into your LP, which you can export just by right-clicking in Substance 3D Painter. I would later use this map in the third workflow, Material Layers. Then, I imported the LP into Unreal and tested how the light worked with the sculpted detail.
I followed this exact workflow for every asset that needed sculpting (except my hero prop), for instance, the fountain:
Nanite visualization
Texturing Using Traditional Workflow
For my hero prop, I decided to go with a bicycle. It was a nice hard surface practice, which had some interesting challenges. Also, I used the traditional high-to-low poly workflow for this one, so this mesh is not decimated and is not using Nanite.
First of all, I started making a blockout as accurately as I could, using actual dimensions from the manufacturer. Some pieces were the same on the other side, so I just made one side.
For the polishing stage, I separated all the parts into 3 main categories: beveled, subdivided, and sculpted.
- Beveled: Meshes that only get bevels and soft edges. This was the easiest of the 3, and usually, they were small pieces or those that didn’t need smoothness (e.g., the accessories on the bicycle handlebar).
- Subdivided: I used this for the high poly. These were meshes that used supporting edges to get the shape we needed when subdividing.
- Sculpted: These were the meshes that needed some kind of sculpted detail. It was mainly the metal tubes of the bike, because I wanted to weld them together as in the real world, so I could get that nice metal-welding effect and add some scratches here and there.
High poly (beveled, subdivided, decimated mesh)
Sculpted detail (I know this looks subtle, but I wanted to practice retopo)
Retopo
Low poly (beveled, not subdivided, retopo mesh)
I know the polycount was still a bit high for a game, but I got some good advice from my mentor: this is your portfolio piece, polys are not that much of a problem for engines nowadays, and you’re trying to make things look pretty, especially for your hero prop.
Texturing the Hero Prop (High to Low Bake)
Not all the parts could fit in the 0-1 UV space, so I decided to use different texture sets. Basically, I made all my UVs and organized them using the layout tool. After that, I took some of those UVs and assigned them a different material in Maya; that way, I could fit all the UVs in the 0-1 space, and then, when texturing, I would simply copy all the materials made in one texture set and duplicate them in the other.
For baking, I softened all the edges of the low poly and added seams wherever there was a hard edge. After I imported it into Substance 3D Painter, I used the settings below.
Since I had different texture sets, I figured out that the best way to texture this was using an ID map, separating the bike parts by materials (metal, plastic, leather, etc.), this way, I could have more control with the masks in Substance 3D Painter.
The rest is just texturing in Substance 3D Painter using smart masks, playing with smart materials, fill layers, filters, etc.
Texturing Big Assets with Nanite and ML
As I said before, for unwrapping, I used “transfer attributes” in Maya most of the time, and when this didn’t work, I went to RizomUV and made the UVs manually.
A problem I faced when doing Nanite and ML was that since I was going to bake maps, my UVs needed to be in the 0-1 space, but sometimes the assets were just too big to fit in there (I was using a texel density (TD) of 10.24, packing into a 4096 texture resolution in most cases). My mentor told me one solution to this problem, and I came up with another one.
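To put those numbers in perspective, here is a quick sanity check of why a big asset can’t fit in a single 0-1 tile at full resolution; this is just a sketch, assuming the TD is in pixels per centimeter, which is the usual convention:

```python
# Quick sanity check, assuming texel density is given in pixels per cm.
texel_density = 10.24        # px/cm (i.e., 1024 px per meter)
texture_resolution = 4096    # px on one side of the texture

coverage_cm = texture_resolution / texel_density
print(coverage_cm)           # 400.0 -> one 4K tile covers roughly a 4 x 4 m surface,
                             # so anything bigger needs more tiles or a different trick
```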
- UDIMs: Using UDIMs in game development would’ve sounded impossible years ago, but now it’s possible thanks to the technology and the updates to Unreal Engine.
We are basically unwrapping our mesh so that we fit as many UV shells as possible in a single tile. If one tile is full, we put the remaining shells in the next tile and repeat. What’s important is that every shell sits inside a tile and does not touch its borders. These tiles are called UDIMs. With this approach, texel density is not compromised because every shell keeps a 10.24 TD, so every baked map (and mask) is at texel density. However, I noticed that this approach has some cons, like:
- You will most likely have seams because you’ll need to make more UV cuts to make the islands fill in one UDIM.
- It gets tedious, time-consuming, and more expensive when you have more than 3 UDIMs. This is because you’ll need to create an ML instance per UDIM, and every ML is going to use at least one RGBA mask. Also, if you sculpted, you’ll need to import an HP normal map per ML instance. So, in terms of optimization, this gets increasingly expensive the more UDIMs you use.
- Using 2 UV sets: this was a solution I came up with, and I’m super proud of it. We create 2 UV sets for the asset; the first one packs all the UVs into the 0-1 space regardless of the TD, and this is the set into which we bake the HP data. Then, we create the second set, which is at TD and is unwrapped in a way that lets us use tileable materials without worrying about UV borders or visible seams.
This way, we avoid creating UDIMs, and even though our bake is not going to be at TD, at least the tiling textures will be. If you do this right, you won’t notice the difference. This is what I used to texture the fountain pool because I knew that it was huge and it would take a lot of UDIMs. However, this has some cons, which are:
- If you have too many sculpted details, you may need more resolution for the bake, so this won’t work well. I used it on the fountain pool because I had mainly just edge wear and some sculpted cracks, but if I had baked the fountain itself, which has more detail, you might notice the difference in TD.
- Your RGB masks will also not be at TD, so you may get blurrier masks. If you’re doing highly detailed masks, then I wouldn’t recommend this approach.
UV set 1 (0-1 space)
UV set 2 (tiling texture)
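If you want to script the second-set part, here is a minimal Maya Python sketch of the idea; “fountainPool” and the UV set names are placeholders, and the actual re-unwrapping at TD is still done by hand:

```python
# Minimal Maya Python sketch of the two-UV-set idea; "fountainPool" is a
# placeholder mesh name and "tilingUVs" a placeholder UV set name.
import maya.cmds as cmds

mesh = 'fountainPool'

# UV set 1 ("map1") already holds the 0-1 packed layout used for baking.
# Create UV set 2 as a copy, then re-unwrap it at full texel density
# for the tileable materials.
cmds.polyUVSet(mesh, copy=True, uvSet='map1', newUVSet='tilingUVs')
cmds.polyUVSet(mesh, currentUVSet=True, uvSet='tilingUVs')
```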
Using Vertex Paint
I used Vertex Paint to texture big assets in the scene, like the walls with damage and dirt. I used two variations.
1. The first variation is a shader that blends 2 textures while using another texture for the “transition” between them. It uses Material Attributes to create an alpha mask in the area we’re painting. This mask is then subtracted in the center, so we get the white mask in the surroundings where we paint.
This was so helpful for painting damage on the walls because it made a more natural and procedural way of blending the bricks (layer 1), the concrete (transition layer), and the plaster (layer 2).
This shader was introduced to me by a global mentor. I learned how it worked, and I changed it to meet my specific needs. For instance, I needed to use POM for the bricks, and I wanted to have parameters exposed for the textures. This is what the final shader looks like:
Something to improve is the POM feature. If you look at it from the side, the blending looks weird. This is because I couldn’t make the other layers work with POM, just the bottom one.
2. The second variation is a more traditional Vertex Paint shader that allows me to blend between 3 different textures. In this case, I used it to vertex-blend the rock floor with some moss and a puddle version I made using the water node in Substance 3D Designer. I used a specific mask for the moss, so it shows mainly on the spaces between the rocks:
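Boiled down, both variations do the same kind of thing: a painted vertex value plus a “transition” texture decide how two layers blend. Here is a minimal Python/NumPy sketch of that idea (an illustration of the math only, not the actual Unreal shader; all names are placeholders):

```python
# Conceptual sketch (NumPy) of a height-based vertex-paint blend: the painted
# vertex value drives the transition, and a transition heightmap decides which
# pixels flip to the second layer first.
import numpy as np

def vertex_blend(layer1, layer2, transition_height, paint, softness=0.1):
    """layer1/layer2: HxWx3 textures, transition_height: HxW heightmap in 0-1,
    paint: HxW painted vertex value interpolated across the surface."""
    # Pixels whose transition height is below the painted value switch to layer2,
    # with a small softness band so the edge isn't a hard cut.
    mask = np.clip((paint - transition_height) / softness, 0.0, 1.0)[..., None]
    return layer1 * (1.0 - mask) + layer2 * mask
```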
Substance 3D Designer
What I’ve learned is that an easier way to make a material is to go from big to small shapes; don’t start with small details. Also, it is good to have some kind of “procedure” or steps you follow for almost every material. For example, I like doing the height map first, then the color, and last the roughness; this way, you’ll have a more organized way of working.
Cobblestone Arched Floor
Now I’m going to explain how I made one of the materials I liked the most, the cobblestone arched floor.
- For the initial shape, I created the tile separately because I wanted to have more control over the shape of the tiles.
- Then I used Flood Fill and Distance nodes so I could have more control over the gaps between the tiles. This also helped give some roundness to the tile shape.
- I then used a Clouds 2 node and edited it a bit to get some blurred, low-frequency noise to plug into a Warp node. This creates a slight, large-scale variation between some areas of the tiles and helps extend the gaps between the tiles a bit.
- Lastly, I used a Blend node in subtract mode to combine the original gaps with the warped ones. This gives us some areas where the gaps are more compressed and some where they are a bit more extended. Then I used a Levels node to have more control over the contrast of the gaps:
- This is where we use one of the most useful nodes, the Flood Fill. This node is so handy, especially for materials like this, where we have tiles or separated shapes and want to give a bit of randomness to each tile.
- I used the Flood Fill to create random gradients to cut out some corners of the tiles, I also used slope blur with some noises to break up the edges a bit. At the end, I used a random color to create a mask and make every tile have a different intensity of disruption.
- I then used the same RGBA split to give some height variation; I combined that with some gradients so every tile is tilted and has a lot more variation in height.
- I noticed that my texture was getting too dark, so I used Auto Levels and Levels nodes to get the value range back.
As you can see, height variation adds a lot of interest to this material. After that I would continue creating smaller and higher-frequency damage, adding the moss and small rocks using Height Blend nodes. This is what the final graph looks like:
Using Substance 3D Sampler, Substance 3D Designer, and AI to Create Materials
I would love to talk about how I made one of the most loved materials of the project, the rock floor. I knew this needed to be one of the best-looking materials because it was in the main area. I wanted to test this new experimental workflow I found that combined AI, Substance 3D Sampler, and Substance 3D Designer. This workflow is taught by the awesome material artist Stan Brown.
First, I had to find an image of a similar texture. I wanted to use an image where the rocks were not so monotone, not so grayscale, but instead with a bit of color. This is the image I chose:
I then used an online AI tool to upscale the image; that way, I would have a crisper image with more detail to work with. I then imported the upscaled image into Photoshop and applied several changes:
- I used Generative Fill so the AI could expand the image (because my texture was going to be 2K).
- I got rid of cracks, bumps, and hard shadows in the rocks because this could later cause trouble in Substance 3D Sampler.
- I offset the image so I could later apply Photoshop’s AI to make it a tileable texture.
After some iterations using Firefly AI, I finally came up with a tileable texture that I liked.
After this, I upscaled the image to 4K to get more crispness. So, once again, I used AI to upscale my image to twice the size, and I didn’t lose detail.
Now, in Substance 3D Designer, the main challenge was making the Flood Fill work because I wanted to give randomness to each rock. After many tries and a lot of experimenting with nodes, I found a way to make it work. It’s not perfect, though, but it still gets the job done:
Now I could get access to random grayscale, random color, etc., to give each rock its uniqueness, I also had access to the ground mask. After that, I exported the 2K texture to Unreal Engine and tested it out.
I also created a puddle version using the water node in Substance 3D Designer, which my mentor taught me about, this would later be blended with Vertex Paint:
Most of my materials shared approaches similar to the arched cobblestone, like the bricks.
If you’d like to see more beautiful renders of other materials made in Substance 3D Designer and some time-lapses, visit my Material Showcase post.
The props that didn’t need sculpting were straightforward to make. Most of them were small and could fit in the 0-1 space, and they were textured in Substance 3D Painter with no high poly, just beveled edges.
The cloth props were made in Marvelous Designer and simulated to have more realistic folds:
Decals
To decorate the buildings' exteriors, I also used decals besides vertex painting. Decals helped a lot to break up texture repetition in the scene and to better blend the meshes that were touching.
I used Substance 3D Designer to create some custom decals but I also used some textures from Textures.com to speed up my process. I usually desaturated all the decals so I could tint them in the engine and make them more procedural:
Adding Depth to Your Scene
Thanks to the feedback from awesome artists like Brighton and Tanay, I noticed that my shots were lacking depth. The solution for this was adding the alley, background buildings, and the church.
I was able to do all of this in a span of 2-3 weeks. That’s because I already had all the modular kits completed, and the church was an assignment I had in school when I was learning modularity for games (so it’s like a little Easter egg!).
Making these changes helped me a lot to expand my shots and make the scene feel like a real environment and not like a movie set.
Adding Movement
Something that happens to a lot of environment artists is that they complete a beautiful-looking environment, but it lacks movement; it feels like a still image you’re just wandering around in. Don’t underestimate the power of adding movement to your scenes; it makes the viewer feel more immersed in your environment and empowers the storytelling. These are some of the tricks and resources I used to add movement to my scene:
- Cigarette smoke: Since I wanted the store to feel more lived in, I added some cigarettes and smoke. For this, I used a ribbon-based smoke trail made in Niagara particles based on Tim Engelke's awesome tutorial.
- Birds: Little plazas like this usually have some birds flying around, and since I already had the wooden bench, I thought that area was perfect for some birds to come by and eat some seed that someone would’ve given them while reading the newspaper (storytelling!). I also wanted to add some birds flying for some background movement. I used two free packs from the Unreal Marketplace for this: AnimalVariety and RuralAustralia.
Shaders and Blueprints
- Water shader: it’s a shader that automatically creates ripples when a mesh touches the water:
- Water falling: This one was tricky to figure out because I didn’t just want a water texture panning from top to bottom; I wanted the water to have the droplets effect so it looked more realistic. I’m lucky I found a good reference, thanks to Ivan Remez.
For all water things, I highly recommend watching Ben Cloward's YouTube Channel, he has some amazing content for water creation and shaders in general.
- Fish moving along spline: Fun to make. I basically set up a Blueprint that allowed me to assign actors to a spline so they could follow its path. I could control parameters like the number of actors, the velocity, and the offset.
- Fish moving fin: A basic shader setup that uses a sine function to sway the fish from side to side via World Position Offset. This helps with the realism of the fish because it simulates their swimming while moving along the spline (a small sketch of this idea follows after this list).
The goldfish was downloaded from Epic3DCrafters. I tweaked the textures to make them game-ready.
- Flags spline: I created a spline that allowed me to create the flags hanging on a cable in a procedural way. This spline has a scale parameter and the option to have just full-length meshes. Also, I made the material so the color of each flag changed depending on the object's position.
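As mentioned in the fish fin point above, here is a small sketch of the sine-based sway; the parameter names are illustrative, not Unreal node names:

```python
# Conceptual sketch of the sine-based World Position Offset used for the fish
# sway; amplitude/frequency/speed are illustrative placeholder parameters.
import math

def side_offset(u, time, amplitude=2.0, frequency=6.0, speed=4.0):
    """u: position along the fish body (0 = head, 1 = tail).
    Returns the sideways offset applied per vertex each frame."""
    # The tail sways more than the head, so the amplitude is scaled by u.
    return math.sin(u * frequency + time * speed) * amplitude * u

# Example: the offset of the tail tip (u = 1.0) at time 0.5
print(side_offset(1.0, 0.5))
```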
Daytime Lighting
For the final lighting, I used Lumen with ray-tracing reflections. My first approach was trying to imitate the lighting of the concept art. As you can see, the artist used a low-contrast greenish light that worked well in his concept. But when I tried it, I didn’t really like the result:
After this test, I felt I wanted the colors to pop more and the scene to feel more vivid and less cold; after all, that’s part of the storytelling of the scene, because I thought of it as a summer vacation place people go to visit.
So, I thought about doing a sunset lighting. After some tweaks, here’s what it looked like:
It still wasn’t there, but it was getting better. You can see that the shadows are less dark because of the indirect lighting and that the warmer color helps enhance the yellow and red colors of the main house, which is one of our protagonists.
I went back to my references and started analyzing what kind of lighting worked better for this type of environment. As you can see, most of them are not sunsets but noon or afternoon lighting with a saturated blue sky.
That’s when I knew the exact lighting scenario I wanted. The game changer was finding Karim Abou Shousha's channel. He has some amazing explanations on how to achieve realistic lighting in Unreal using Lumen. I highly recommend checking him out.
What I basically did was:
- I changed the default sky light to an HDRI Backdrop.
- I used the sky light as my main source for fill lights (blue tones for the shadows) and reflections.
- I played a lot more with the Post Process Volume, adjusting things like the exposure, the bloom, and the post effects, and I played a lot with the color grading tab.
- I made the fog more visible and made it scatter more light.
I’ve come to realize that lighting is such an iterative process. You must try and try different moods, change the light direction, play with the colors, shadows, etc., until you find the one you “click” with. It’s one of my favorite parts because it’s when your most artistic side comes into play.
Nighttime Lighting
A mistake I made during my first tries was that I gave the same amount of intensity to almost every light, and almost all of them had a yellow-orange color. I needed to have more color/intensity variations to fit the vibrancy of the scene. Huge thanks to Tanay Parab, who helped me realize this. Here’s the nighttime with lighting only:
As you can see, I played a lot more with the color and intensity of each light. In the real world, you'll notice that not every bulb is the same. Some are older, some are recently changed, and some are flickering or broken. I also went for a more blue-ish mood. You can see the progression here:
Conclusion
Making this project was hard; it took me around 10 months to complete, but I’m so proud that I pushed forward even on the days when I thought I couldn’t do it.
Mistakes to learn from:
- Having more clarity on what was going on in my concept at an early stage would’ve helped me a lot, for instance, I lost a lot of time trying to figure out the houses surrounding the fountain.
- Not importing the blockout meshes with materials/colors assigned to them. I worked for many weeks just looking at gray meshes, and it felt like I was not progressing at all. A quick color blockout of everything would have helped me understand how the composition would look and would have made the progress feel more visible.
- Not extruding some modular pieces, like the walls. This sometimes caused light-leaking problems, so I had to create cubes that blocked the light from outside.
- 3 is the magical number. I should’ve made at least 3 variations of the doors, the windows, and the balcony. That way, the repetition is much less noticeable.
Tips and advice:
- You must learn to detach from your project and take care of your mind and body. The project you’ve been working on is not as important as you think. I was having this toxic relationship with my project. I was giving it way too much importance, but I understood that there are more important things in life, like your health or relationships, so book time for those. After this, I started to work faster, better and happier.
- Over-feedback is something that happens and can affect you negatively. Don’t ask everyone for their opinion; keep a few trusted people you can count on one hand.
- Change the silhouette of things, avoid 90-degree lines, and try to break them up.
- If you’re ever torn between an artistic decision (what looks good) and a real-life decision, I’d always go for the artistic one.
- If something is taking longer than you feel it should, move on to something else.
- Take feedback respectfully and not personally, do what they tell you to do because they are the ones who know. If you disagree, you should have good arguments for it.
- Try to work on your project even on the days you don’t feel like it: advancing 1-2% is better than 0.
I want to thank Aleksandar Danilovac, my mentor, for helping me to think outside the box with these new workflows and for giving me the confidence to trust the process and trust in my skills. Big thanks to the global mentors at Think Tank, and a huge thank you to my parents and my girlfriend, who provided crucial support for me. This is dedicated to my dog, Tommy, who passed away while I was completing this project.
Thank you so much for reading my breakdown, I hope you learned something. I’m now eager to start my career in the games industry, if you have any advice or opportunities you can offer, hit me up!