
Realistic Prop Creation for AAA Games in Unreal Engine 5

3D Props Artist Shyamsagar S has shared an enormous breakdown of the Electric Generator project, explained how the prop was modeled and textured, and demonstrated lighting and rendering processes in UE5.

Intro

In this article, I will go through the overall process of how I approached creating an electric generator that is in use, stored outside on the ground, and worn down by bad weather. To create the project, I utilized PureRef, Maya, Substance 3D Painter, Photoshop, and Unreal Engine 5.


Planning and Timeline

The asset was planned for two weeks. Here you can see the overall breakdown of the tasks done on each day.

Reference Collection

The asset creation started with some research and reference collection as per the brief. Once the primary reference was finalized, additional details and references were collected regarding dimensions, material properties, etc. PureRef is quite handy to collect and organize all references, inputs, mark-up, and so on.


Maya Setup and White Box

Since Unreal Engine uses a Z-up, left-handed coordinate system, Maya's up axis is set to Z and the viewport units to centimeters under Windows > Settings/Preferences. Then the default view is reset in the viewport.
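As a quick sketch, the same setup can be scripted with standard Maya Python commands and run from the Script Editor:

    from maya import cmds

    # Match Unreal's orientation: set the up axis to Z and rotate the current view to follow
    cmds.upAxis(axis="z", rotateView=True)

    # Work in centimeters, the unit Unreal expects
    cmds.currentUnit(linear="cm")

    # Reset the default perspective camera to its home position
    cmds.viewSet("persp", home=True)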


An average human (approx. 170cm) is used for scale. Then, the white box is made according to the real dimensions of this particular electric generator, i.e. 52x33x46 cm, and it serves as a bounding box so we won’t go out of scale.
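A minimal Maya Python sketch of this blocking step; the object names are placeholders, and mapping 52x33x46 cm to width/depth/height is just one reasonable reading of the dimensions:

    from maya import cmds

    # White box matching the real generator dimensions (52 x 33 x 46 cm), built Z-up
    cmds.polyCube(name="generator_whitebox", width=52, depth=33, height=46, axis=(0, 0, 1))
    cmds.move(0, 0, 23, "generator_whitebox", absolute=True)  # rest it on the ground plane

    # Rough 170 cm cylinder as a human-scale reference next to the asset
    cmds.polyCylinder(name="human_scale_ref", radius=20, height=170, axis=(0, 0, 1))
    cmds.move(100, 0, 85, "human_scale_ref", absolute=True)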

Setting Cameras for Basic Camera Matching and Blocking

At this stage, I went for a manual camera match approach, using multiple images from different angles to get 80% to 90% accuracy for the volume and shapes. I was not looking for a 100% accurate match, as I wasn't planning to make a digital double or use texture projection. However, I would suggest that even if you only have a single concept image or real-world reference, it's good to do a quick camera match, get all the basic shapes and volume in place, and then refine through observation.

  • First, I selected the best images from the collected references, cropped each one to a 1k square, and did a rough alignment by superimposing them over one another in Photoshop before saving them out one by one. This helps keep the aspect ratio of all camera images the same.
  • Then, all cameras are created with proper names and the image planes are added. In the image plane attributes, Display can be set to "looking through camera" so the image is only visible through that particular camera. The Placement > Depth value can also be increased so the image sits behind the mesh. A minimal scripted version of this camera setup is sketched after this list.
  • Here, I am marking the faces of the base mesh with a color coding to easily identify and match the front and back of the object in perspective with the correct camera image planes. For example, here the red-colored surface is the front of the generator (facing towards the X-axis), green left, and blue top.
  • Then, the white box/grey box is aligned for an initial camera match. At times, we have to adjust the focal length to get a good match, and the default focal length of 35mm won't always work. If we don't have all the image properties, like the focal length, some guesswork is required. For that, it's good to study how photographs look at certain focal lengths, because lens distortion can make it really confusing to judge the shape and volume of the asset for a better camera match. There are a lot of resources on the web which explain this subject. Here are two examples.
  • A two-panel viewport is used: one for the perspective view and one for the cameras. All the modeling is done in the perspective view, and each camera is adjusted using the camera tools like track, dolly, tumble, etc. The cameras are switched and checked simultaneously to achieve a good match. I would suggest not locking the cameras until every camera is almost matching. It's always good to start the match with big and simple shapes like a box or a cylinder and then move into details.
  • After analyzing the references, I could see that the main body of the asset could be built from a quarter section for the base volume and shape and then manipulated to make it unique later on. I also added loops and applied materials to faces to differentiate between the different panels and details of the generator.
  • At each stage of blocking, the cameras are adjusted and matched while the major details are built up. It's also good to analyze how many triangles we can use and how to distribute them to get better low poly and high poly assets.
  • Blocking camera match.
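For reference, here is a minimal Maya Python sketch of the camera and image plane setup described above; the camera names and image paths are placeholders:

    from maya import cmds

    # Placeholder reference images, one per matching camera
    reference_images = {
        "camMatch_front": "refs/generator_front_1k.jpg",
        "camMatch_left": "refs/generator_left_1k.jpg",
    }

    for cam_name, image_path in reference_images.items():
        cam_transform, cam_shape = cmds.camera(name=cam_name, focalLength=35)
        plane_transform, plane_shape = cmds.imagePlane(camera=cam_shape, fileName=image_path)

        # Show the reference only when looking through this camera
        cmds.setAttr(plane_shape + ".displayOnlyIfCurrent", 1)

        # Push the image plane back so the mesh draws in front of it
        cmds.setAttr(plane_shape + ".depth", 500)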

High Poly Creation

Once the blocking is good, creating the high poly and low poly is much easier. The process is only about refining the silhouette, volume, shapes, and details with good edge loops. Also, make sure the high poly is suitable for the normal map bake: respect the bevel angles, avoid 90-degree sharp edges, and check how light reacts on the faces, edges, etc.

Stages of the high poly model:


Final high poly and high poly camera match:


Low Poly Creation

  • The target low poly count is 7000 tris/3500 polygons.
  • The low poly is created from the initial blocking and the high poly by removing and adding loops. If we follow a good loop flow from the beginning of the modeling process, this step becomes quite easy. I also made sure the triangle count is balanced across the mesh to get the best silhouette and shapes.
  • Once the low poly is done, a mesh clean-up and a reset transform are done (see the sketch after this list). This helps find and clean up all n-gons, lamina faces, etc., and gets the pivot right. This QC is done for the high poly as well.
  • Clean-up using Maya's clean-up tool.
  • Reset and freeze transformation.
  • A single material is assigned and named, since for this asset we are going to bring in all the material properties through texture maps with a PBR workflow. A pass of smoothing groups is also done.
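A rough Maya Python sketch of this clean-up pass, assuming a low poly mesh named generator_low (the Cleanup check itself is easiest to run from Mesh > Cleanup):

    from maya import cmds

    low_mesh = "generator_low"  # placeholder name

    # Delete construction history, freeze transforms, and center the pivot
    cmds.delete(low_mesh, constructionHistory=True)
    cmds.makeIdentity(low_mesh, apply=True, translate=True, rotate=True, scale=True)
    cmds.xform(low_mesh, centerPivots=True)

    # Assign a single named material so all surface properties come from the PBR texture maps
    shader = cmds.shadingNode("lambert", asShader=True, name="M_Generator")
    shading_group = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                              name="M_GeneratorSG")
    cmds.connectAttr(shader + ".outColor", shading_group + ".surfaceShader")
    cmds.sets(low_mesh, edit=True, forceElement=shading_group)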

Final low poly and low poly camera match, final tri-count is 6953:


UV and Bake Prep

  • The UVs are finalized with a uniform texel ratio. Small objects like nuts and bolts are scaled up a bit. Here I am going with a completely unique/atlas map approach, with alpha (alpha test/alpha blend) for the grills. I checked against a texel ratio of 1k/m, and the asset will hold up at both 2k and 4k. If a fully optimized map is required, we could mirror one half of the asset, but that would make it less unique and we would have to compromise on many design factors and texture details.
  • Texel ratio check with a 1k/m cube. Here we are getting more than 1k/m with a 2k or 4k texture for the generator.
  • A well-packed UV layout/island is made.
  • The final smoothing groups and all naming are done for bake prep, so the bake can match by mesh name in Substance 3D Painter and avoid artifacts and bake bleed.
  • Material IDs are created on the high poly for easy mask selection. Texturing is done in Substance 3D Painter.
  • The mesh is exported as .fbx (a minimal export script is sketched after this list), and a bake test is done at a low resolution of 512/1024 to find artifacts; then, if all is good, the final 2k/4k is baked.
  • The low poly is imported into Substance 3D Painter, and the UVs, material, and high poly import are checked.
  • Even after everything, the moment of truth: bake artifacts! Revisit the model, settings, etc., find a fix, and re-bake. Here, for example, it was a quick fix: a simple rotation plus increasing the resolution and subsamples.
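As a sketch of the export step, assuming the FBX plug-in is loaded and the mesh names and paths are placeholders (the matching _low/_high suffixes are what lets the baker match by mesh name):

    from maya import cmds

    cmds.loadPlugin("fbxmaya", quiet=True)

    # Export the low and high poly meshes separately for baking by mesh name
    for mesh, path in (("generator_low", "export/generator_low.fbx"),
                       ("generator_high", "export/generator_high.fbx")):
        cmds.select(mesh, replace=True)
        cmds.file(path, force=True, type="FBX export",
                  exportSelected=True, preserveReferences=True)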

Final bake and baked maps:


Texture and Material Creation

  • Once the bake is good, the texture is built up from base color blocking to the final details.
  • Here, I went for a look that is not too damaged or abandoned, yet has gone through the wear and tear you would expect from long-term use and exposure to the weather and surroundings. For the branding, I followed the reference.
  • Substance 3D Painter layers:
  • Texture initial and final passes:
  • Texture maps:
  • Finally, I ran the PBR validation check for better PBR renders. All green is good; a little yellow is fine. Try to avoid red and purple, which appear when your texture is either too dark (values too close to black) or too bright (values too close to white).
  • Iray render:

Integration and LookDev in UE5

  • A blank project is created with ray tracing turned on. Then we wait for the shaders to compile!
  • Here we will be using UE5 Lumen and ray-traced shadows, not virtual shadow maps or baked lighting. I learned many of the lighting techniques used here from the free resources of Unreal Engine mentor William Faucher.
  • In Settings > Project Settings > Rendering, I enabled "Support Hardware Ray Tracing" and "Use Hardware Ray Tracing when available." Please note that this will only work if the GPU supports hardware ray tracing, for example, NVIDIA RTX GPUs.
  • An empty level is created, and a few folders are added to organize all the content used in the scene.
  • All the required models and textures are imported. Generally, the default settings are kept; here, I opted to import materials and also auto-generate collision. A scripted import is sketched below.
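If you prefer to script the import, a minimal Unreal Python sketch looks like this (the file and content paths are placeholders):

    import unreal

    task = unreal.AssetImportTask()
    task.filename = "D:/export/generator_low.fbx"            # placeholder source file
    task.destination_path = "/Game/ElectricGenerator/Mesh"   # placeholder content folder
    task.automated = True   # skip the interactive import dialog
    task.save = True        # save the imported assets immediately

    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])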
  • A simple master material is created, in which a Normal Map can be added just to kick-start the greyscale lookdev.
  • Material instances are created for the backdrop, the generator, the reflection ball (Albedo V = 0.9/Roughness 0.05), and the matte ball (Albedo V = 0.18/Roughness 0.4). The matte and reflection balls help in understanding and controlling the exposure of the scene. A scripted example of setting these instance parameters is sketched below.
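As an illustration, the ball material instances could also be set through Unreal's Python API; the asset paths and the "Albedo"/"Roughness" parameter names are placeholders that depend on your master material:

    import unreal

    lib = unreal.MaterialEditingLibrary

    # Chrome-like reflection ball: near-white albedo, very low roughness
    reflection_ball = unreal.EditorAssetLibrary.load_asset(
        "/Game/ElectricGenerator/Materials/MI_ReflectionBall")
    lib.set_material_instance_vector_parameter_value(
        reflection_ball, "Albedo", unreal.LinearColor(0.9, 0.9, 0.9, 1.0))
    lib.set_material_instance_scalar_parameter_value(reflection_ball, "Roughness", 0.05)

    # Matte ball: 18% grey albedo, medium roughness
    matte_ball = unreal.EditorAssetLibrary.load_asset(
        "/Game/ElectricGenerator/Materials/MI_MatteBall")
    lib.set_material_instance_vector_parameter_value(
        matte_ball, "Albedo", unreal.LinearColor(0.18, 0.18, 0.18, 1.0))
    lib.set_material_instance_scalar_parameter_value(matte_ball, "Roughness", 0.4)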
  • The models are added to the scene, and to get some light, the key light is added using a Rect Light. This light behaves similarly to real-world softbox lighting.
  • A cinematic camera (Cine Camera Actor) and a Sequencer are added. With the help of the composition overlay grids, the camera framing can be set. For this, you have to select your cine camera and switch to the cinematic viewport.
  • Then, a second viewport is added to work on further lights, the Post Process Volume, etc. in the perspective or other orthographic views while checking the updates through the cinematic camera.
  • Before moving into lighting, the manual focus distance is adjusted for the camera angle. The picker can be used to click where the camera should focus, and the values are adjusted accordingly.
  • An HDRI Backdrop is added and moved down so it does not intersect with the ground plane.
  • To get started with neutral lighting, I imported the "Tomoco_Studio" HDRI and adjusted the intensity and rotation. You can find tons of free HDRIs online. Try to use 2k and above to get good quality.
  • A Post Process Volume is added.
  • It does not matter where the Post Process Volume sits if you switch on Post Process Volume Settings > Infinite Extent (Unbound).
  • In order to disable the auto exposure of the scene and get better control over exposure, keep the Metering Mode on Manual and enable Apply Physical Camera Exposure.
  • Now the Exposure Compensation can be adjusted to get the desired exposure without increasing the intensity of any of the lights added. A scripted version of this Post Process Volume setup is sketched below.
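A minimal Unreal Python sketch of this Post Process Volume setup; the spawn location is arbitrary since the volume is unbound, and the Exposure Compensation value is a placeholder:

    import unreal

    ppv = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.PostProcessVolume, unreal.Vector(0, 0, 0))

    # Affect the whole level regardless of where the volume sits
    ppv.set_editor_property("unbound", True)

    # Switch to manual metering and drive the look with Exposure Compensation
    settings = ppv.settings
    settings.override_auto_exposure_method = True
    settings.auto_exposure_method = unreal.AutoExposureMethod.AEM_MANUAL
    settings.override_auto_exposure_bias = True
    settings.auto_exposure_bias = 1.0  # placeholder Exposure Compensation value
    ppv.set_editor_property("settings", settings)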
  • A Lightmass Importance Volume is added and placed where we want the lighting to be calculated. This keeps the light processing within the volume bounds and avoids processing the entire scene. Then a Sphere Reflection Capture is also added and the reflections are built.
  • Here we are going for a 3-point light setup. All lights are set to Movable. A scripted version of the key light is sketched after the light settings below.
  • Key light (Intensity: 1.2 cd, Color: warm, Cast shadows: ON, Samples: 2)
  • Fill light (Intensity: 0.6 cd, Color: cold, Cast shadows: OFF, Samples: 2)
  • Back light (Intensity: 0.5 cd, Color: white, Cast shadows: OFF, Samples: 2)
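For reference, here is a sketch of spawning the key light through the Unreal Python API; the transform and color values are placeholders, and the fill and back lights follow the same pattern:

    import unreal

    key_light = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.RectLight, unreal.Vector(200.0, -200.0, 250.0))  # placeholder position

    light_comp = key_light.get_component_by_class(unreal.RectLightComponent)
    light_comp.set_mobility(unreal.ComponentMobility.MOVABLE)
    light_comp.set_editor_property("intensity_units", unreal.LightUnits.CANDELAS)
    light_comp.set_editor_property("intensity", 1.2)  # 1.2 cd key light
    light_comp.set_light_color(unreal.LinearColor(1.0, 0.85, 0.7, 1.0))  # warm tint
    light_comp.set_editor_property("cast_shadows", True)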
  • HDRI (Intensity 0.35). The exposure is also adjusted in the Post Process Volume. Note: this is an iterative process until we get the desired output. You can also increase the various Lumen sample counts in the Post Process Volume settings to get cleaner results, depending on hardware performance.
  • For smoother ray-traced shadows with less banding, "Cast Ray Traced Shadows" is enabled on the lights.
  • To make a simple turntable, right-click in the Sequencer and add the cine camera and the asset actor.
  • Then set the start and end keys for a 360-degree rotation on the asset actor's Transform > Rotation > Yaw.
  • The curve is selected in the Sequencer's Curve Editor and set to linear interpolation.
  • The sequence/shots and viewport high-res screenshots can be taken of the grey model. It's good to keep the warm-up settings so the scene is ready before capture. A one-line screenshot script is sketched below.
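For the still captures, one scripted option is Unreal's automation helper; the resolution and file name here are just examples:

    import unreal

    # Capture a 4K still of the current viewport; it is written to the project's Saved/Screenshots folder
    unreal.AutomationLibrary.take_high_res_screenshot(3840, 2160, "generator_greybox_01.png")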
  • Then, I went ahead and made a simple master material for the textured version.
  • I did some minor tweaks in the shader and lighting until the desired output was achieved.

I hope everyone reading this finds some valuable information. You can find all the final HQ renders here. You can also find me on LinkedIn, Twitter, and Instagram. Thank you.

Shyamsagar S, 3D Artist

Comments

  • Ocana Méndez Jorge: Just wanted to say that if you are using Lumen, it makes no sense adding a Lightmass Importance Volume and Sphere Reflection Captures, as they will do nothing at all. Those will just be needed if you are using static lighting, which does not seem to be the case.

    Good job. I love the result.