Max Kutsenko explained how he transformed photos of eucalyptus bark into a 3D texture with photogrammetry, using RealityCapture and ArtEngine, and shared his camera settings.
Introduction
Hello guys, my name is Max Kutsenko, and I am a texture artist at Ubisoft Barcelona working on an exciting unannounced project.
In the past, all my materials were built from scratch using Substance 3D Designer. Looking back, I believe it was the right path to choose, as it helped me learn the fundamentals and gain a deep understanding of how the nodes operate. The type of freelance jobs I was doing also required me to keep everything procedural, so the end user could have full control over the final look of the material.
As time went on and I developed my skills, I started mixing scanned data into my textures to give that extra touch of realism, especially when it came to organic-looking surfaces like forest grounds and leafy soils. I was using the scans from sites like Textures.com or Megascans and was amazed at how rapidly their libraries were growing with so much content. As a curious person, I wanted to know how to do those scans myself, so I started researching the whole scanning process, also known as photogrammetry.
At the end of last year, I was lucky enough to take some field trips with people who already had prior knowledge of this discipline and see firsthand how surface scanning works using professional camera equipment. I must say, I fell in love with the idea of exploring various locations in search of awesome-looking surfaces that I could capture and convert into my materials. So I spent quite some time building my own camera rig, trying to get all the essential gear that would allow my captured materials to shine. This year, I am dedicating some of my free time to visiting different places, scanning my surroundings, and learning along the way, building up valuable in-field experience.
Eucalyptus Bark
Long story short, as I was strolling through a local park, I stumbled upon an interesting-looking tree; at the time, I did not know it was a eucalyptus. What caught my attention was the way big chunks of bark were peeling off its trunk, creating a unique-looking silhouette. I instantly thought to myself that the bark would make a great material and allow me to practice tree bark capture, which I had not done much of. As the tree was huge, I decided to capture a roughly 2-by-1-meter surface area, which tends to tile pretty well on a vertical tree. I had a foldable measuring tape, which came in handy.
For the scanning itself, I use a Canon R7 camera with an 18-150 zoom lens. I have been trying to keep my focal length within the 35-50 mm range to avoid image distortion; for the bark, I settled on 40 mm. I set my aperture to f/8, as we want to keep everything in focus. I already have my custom white balance set by shooting the grey card of the X-Rite ColorChecker Passport (another great investment), so my images are consistent, but you can also set the white balance in post-processing when you shoot in RAW format.
I keep the ISO as low as possible to avoid introducing undesired noise into my textures. Then I take a few test shots to check the exposure: I look at the histogram, and if it sits more or less in the middle, I am good to go, as it means there is no white or black clipping from over- or underexposure. As I use a Godox ring flash, my shutter speed is locked at 1/200, and I set the flash to 1/4 power. This flash was an expensive piece of equipment but a super valuable one: I do not have to depend so much on the lighting conditions of the day because, if the flash is strong enough, it can override direct sunlight as well as cast shadows and ambient occlusion. This means I don't need to wait for an overcast day (the ideal condition for scanning) or do any shadow removal or delighting in post-processing, which saves me time.
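The in-camera histogram is usually enough, but if you want to double-check a batch of test shots on a laptop, a small script can flag clipping for you. Here is a minimal sketch using Pillow and NumPy on a JPEG preview; the file name and thresholds are just placeholder assumptions.

```python
# Quick exposure sanity check on a test shot (JPEG preview).
# Flags frames where too many pixels sit at the extremes of the histogram,
# i.e. likely black or white clipping.
import numpy as np
from PIL import Image

def clipping_report(path, low=2, high=253, max_clipped=0.005):
    img = np.asarray(Image.open(path).convert("L"))  # luminance, 0-255
    total = img.size
    dark = np.count_nonzero(img <= low) / total      # fraction of near-black pixels
    bright = np.count_nonzero(img >= high) / total   # fraction of near-white pixels
    ok = dark <= max_clipped and bright <= max_clipped
    print(f"{path}: {dark:.2%} near black, {bright:.2%} near white -> {'OK' if ok else 'reshoot'}")
    return ok

clipping_report("test_shot_0001.jpg")  # placeholder file name
```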
Speaking of time-saving, I also have a cross-polarization filter attached to the flash and lens. It removes reflections from the captured surface, so once again, I do not need to do it in post-processing.
To sum it all up, my camera settings were the following:
- Image Format: RAW
- White balance: Custom (shot on the X-Rite grey card)
- Focal length: 40 mm
- Aperture: f/8
- ISO: 200
- Shutter speed: 1/200
- Flash power: 1/4
Before the capture, I shot my X-Rite ColorChecker Passport by placing it on the tree; it would be helpful later on for the color calibration process. I then proceeded to take around 300 images of the tree bark, taking small steps to the side with each shot and then moving up, following the curvature of the tree. It seems like a lot, but I found that the more images I take, the better the final result I get when building a high-poly model in RealityCapture; the software handles large amounts of data well.
While I shoot, I hold image-stabilizing handles with a remote shutter button. Those handles are made to work with the ring flash and greatly improve the usability of the whole camera rig. I purchased those from here after getting advice from my fellow material artist Enrico Tammekänd. Another viable option is to use a tripod or monopod to stabilize your shooting, but I found the handles more convenient, especially for vertical surfaces.
The most important thing to remember when you shoot is that there needs to be an overlap between the images so the software can look for common features between two images and stitch them together. This was my mistake when I was starting out: I shot a small number of images and did not cover the whole area, so I was getting broken models, as RealityCapture needed more information to work with.
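To give a rough idea of what that overlap means in practice, here is a small back-of-the-envelope sketch. The sensor width is the approximate Canon APS-C value; the shooting distance and overlap target are assumptions you would adjust on location.

```python
# Rough estimate of how far to step sideways between shots to keep a given
# image-to-image overlap. Distance to the bark and the overlap target are
# assumed values, not measurements from this scan.
import math

SENSOR_WIDTH_MM = 22.3   # approximate Canon APS-C sensor width
FOCAL_LENGTH_MM = 40.0   # focal length used for the bark
DISTANCE_M = 0.7         # assumed camera-to-bark distance
OVERLAP = 0.7            # aim for roughly 70% overlap between neighbouring shots

footprint_m = SENSOR_WIDTH_MM / FOCAL_LENGTH_MM * DISTANCE_M  # width covered on the bark
step_m = footprint_m * (1.0 - OVERLAP)                        # sideways step per shot
shots_per_row = math.ceil(2.0 / step_m)                       # to cover a 2 m wide patch

print(f"Footprint ~{footprint_m * 100:.0f} cm, step ~{step_m * 100:.0f} cm, "
      f"~{shots_per_row} shots per horizontal row")
```

With several overlapping rows stacked vertically, you quickly end up in the hundreds of images, which is why a 300-shot count is not as excessive as it sounds.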
Once I had captured the images, the next step was to process them in Adobe Lightroom. The basic idea here is to remove any post-process effect your camera might have automatically applied to make the photos look prettier. This is why it is important to capture your images in RAW format: you have access to all these modifications and can easily manipulate the image data. I remove any vignetting, chromatic aberration, auto exposure, or lens distortion and set the image to linear. I also use the X-Rite ColorChecker image to do a color calibration, making sure my colors are accurate and match a standardized set of reference colors.
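Lightroom covers all of this for me, but if you ever want to script the neutral RAW development step, rawpy (a LibRaw wrapper) can produce a linear, non-auto-brightened conversion. This sketch only handles that one part of the process: lens corrections and the ColorChecker calibration still need Lightroom or dedicated tools, the file names are placeholders, and CR3 support depends on your LibRaw build.

```python
# Develop a RAW file linearly: no tone curve, no auto exposure, and the custom
# white balance that was set in-camera from the grey card.
import rawpy
import imageio.v3 as iio

with rawpy.imread("bark_0001.CR3") as raw:  # placeholder file name
    rgb = raw.postprocess(
        gamma=(1, 1),          # linear output, no gamma curve
        no_auto_bright=True,   # keep the exposure exactly as shot
        use_camera_wb=True,    # reuse the in-camera custom white balance
        output_bps=16,         # 16-bit output for later calibration
    )

iio.imwrite("bark_0001_linear.tiff", rgb)
```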
Once I processed the images, I batch-imported them into RealityCapture (RC) to do my 3D reconstruction. It's the only software I have been using up until now because I like its interface, it is pretty fast in my opinion, and it was recommended to me by other artists. It was also recently acquired by Epic, becoming part of its ecosystem, which makes it free now, so it's a big win.
The first thing I do is align my images; as RC explains, “Image alignment or registration is a process of image feature detection and matching common tie points to create a sparse point cloud.” After that, I make a high-poly mesh, which I typically reconstruct in “Normal Detail” as it gives me enough density and resolution, but if you have a high-end graphics card, you can even go for a “High Detail” reconstruction. My “Normal Detail” mesh came out at 72 million tris; the next step is to pass color information to the model. For that, I use the Texture button; the only issue is that the reconstruction produces such high-density meshes that RC struggles to unwrap them before texturing: it either takes too long or crashes. The solution I found on the internet is to simplify the mesh first. I take my 72-million-tri mesh and, with the “Simplify Tool”, reduce it down to 20 million tris. After that, I can easily unwrap the model and project the color texture information onto it.
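The Simplify Tool is what I actually use, but if you ever need to decimate a mesh like this outside RC, a scripted alternative could look roughly like the sketch below with pymeshlab. The filter and parameter names follow recent pymeshlab releases and may differ in yours; the file names are placeholders.

```python
# Decimate the reconstruction outside RealityCapture, as an alternative to its
# Simplify Tool. Quadric edge collapse keeps the overall silhouette of the bark
# while dropping the triangle count to a target number.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("trunk_72m.obj")

ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=20_000_000,  # roughly the 20 million tris used here
    preservenormal=True,       # avoid flipping faces during collapse
)

ms.save_current_mesh("trunk_20m.obj")
```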
As we are interested in material creation in our case, we need to convert the scan into tileable texture maps, and the first step is to bake the high-poly information onto a plane. As the tree trunk is round, we also need to bend our plane so it wraps around the trunk tightly; this way, when we bake, the rays can capture all the information. I prefer to do this process in Houdini, as it handles dense geometry pretty well. I exported my simplified trunk mesh in .obj format and moved it into Houdini; I found this format works best for me. If your 3D package of choice still struggles to process dense geometry, you can simplify it even more in RC; as long as the model retains its basic shape, you are fine.
In my particular case, I decided to start with an evenly divided cylinder, deleted the unnecessary geometry, positioned it under the high-poly trunk, and then moved individual vertices a bit so the cylinder better conformed to the high-poly trunk. Once I was happy with the positioning, I UVed the cylinder and exported it back to RC.
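I built this cage by hand, but the same kind of bake cylinder could also be generated procedurally. Below is a small sketch that writes a partial, UVed cylinder to OBJ; the radius, arc, and grid resolution are assumptions you would match to your own trunk, not measurements from this one.

```python
# Procedural stand-in for the hand-made bake cylinder: a partial cylinder that
# hugs the trunk, with simple rectangular UVs, written out as an OBJ file.
import math

RADIUS = 0.7         # assumed trunk radius in metres (arc length ~2 m at 160 degrees)
ARC_DEG = 160.0      # how far around the trunk the captured strip wraps
HEIGHT = 1.0         # height of the captured patch in metres
COLS, ROWS = 64, 32  # grid resolution of the cage

with open("bake_cylinder.obj", "w") as f:
    # Vertices and matching UVs, row by row from bottom to top.
    for r in range(ROWS + 1):
        v = r / ROWS
        for c in range(COLS + 1):
            u = c / COLS
            ang = math.radians((u - 0.5) * ARC_DEG)
            x, y, z = RADIUS * math.sin(ang), v * HEIGHT, RADIUS * math.cos(ang)
            f.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
            f.write(f"vt {u:.6f} {v:.6f}\n")
    # Quad faces; OBJ indices are 1-based and vt indices mirror the v indices.
    for r in range(ROWS):
        for c in range(COLS):
            a = r * (COLS + 1) + c + 1
            b, d, e = a + 1, a + COLS + 1, a + COLS + 2
            f.write(f"f {a}/{a} {b}/{b} {e}/{e} {d}/{d}\n")
```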
I do my color, normal, and displacement map baking inside RC as well, using the “Texture Reprojection” function. I select my high-poly mesh as the “Source Model” and the baking cylinder as the “Result Model,” then hit reproject. The process is fairly quick, and here are the maps I get from the RC bake. The result is not yet usable as a material since it is square and the edges have a seam, but we are going to fix that in another piece of software called Unity ArtEngine.
I use ArtEngine for the following operations: removing unwanted parts from textures, clone stamping, seam removal, roughness, and AO map generation. I heard some really good things about this AI-assisted software, gave it a try, and was truly impressed with its capabilities. Unfortunately, it's no longer supported, but you can still download a legacy version.
I drop my RC-generated 8K maps into the ArtEngine viewport and downsize them to 4K; this way, the software processes the images faster. Then I generate a roughness map from the color map; there is a nice drop-down list of presets to select from, and “wood” is a match in our case. We also generate an ambient occlusion map from the height map.
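ArtEngine's generators are learned models, so no short script reproduces them, but just to illustrate the idea, here is a crude stand-in: downscale the albedo and derive a first-pass roughness from its inverted, normalized luminance. This is not what ArtEngine does internally, only a rough starting point if you ever need one without it; the file names are placeholders.

```python
# Crude stand-in for the resize and roughness-generation steps: downscale the
# 8K albedo to 4K, then guess roughness from inverted, normalised luminance
# (darker, dirtier bark tends to read as rougher). Only an illustration.
import numpy as np
from PIL import Image

albedo = Image.open("bark_albedo_8k.png").resize((4096, 4096), Image.LANCZOS)
albedo.save("bark_albedo_4k.png")

lum = np.asarray(albedo.convert("L"), dtype=np.float32) / 255.0
lum = (lum - lum.min()) / (lum.max() - lum.min() + 1e-6)  # normalise contrast
rough = 1.0 - lum                                         # darker albedo -> rougher guess

Image.fromarray((rough * 255).astype(np.uint8)).save("bark_roughness_4k.png")
```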
Next, I compose the maps into a PBR material, so from now on, any manipulation I do will be applied to all my maps simultaneously. With the “Transform” and “Resize” nodes, I set the material ratio to 1:2 so it becomes 2K x 4K. After that, I use “Seam Removal” to make my material seamless so we can easily tile it on a tree. The blue frame represents where the seam will be removed. I adjust the frame and hit “Execute”; the AI analyzes my texture and naturally fills in the seams. If you hit “Tab” in the 2D view, you can see the tiling, and here are the results before and after.
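If you want to double-check the tiling outside ArtEngine, rolling the texture by half its size pushes the original borders into the middle of the frame, where any leftover seam is easy to spot. A quick sketch; the file names are placeholders.

```python
# Quick tiling check: wrap the texture by half its width and height so the
# original borders meet in the centre, where a remaining seam is obvious.
import numpy as np
from PIL import Image

tex = np.asarray(Image.open("bark_albedo_4k_tiled.png"))
h, w = tex.shape[:2]
preview = np.roll(tex, shift=(h // 2, w // 2), axis=(0, 1))
Image.fromarray(preview).save("bark_albedo_seam_preview.png")
```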
The last step is to analyze our maps to see if there are any artifacts. Indeed, I find some in the normal map around the edges where bark pieces peel off: as the bark detaches from the main trunk, those spaces become empty, and RC fills them with blurry normals. I would like to get rid of those and choose to use the “Clone Stamp” node. Holding Alt, I can sample any area of the normal map and paint out the unwanted parts, so I cover all the blurry or strange areas with this brush.
It is a bit of a tedious process, which took me more than half an hour, but it is worth investing the time to get much cleaner textures. Once everything is fixed, I output my final textures for the final render in Marmoset Toolbag.
For my final presentation, I used the “Tree Trunk Generator” node in Houdini, which allowed me to create an interesting-looking tree trunk super quickly, with lots of parameters and an automatic square UV unwrap. In Marmoset Toolbag, I placed my final textures, created a material, and applied it to the trunk. Then I simply made several renders with different lighting conditions, so you can visualize how the material could look at different times of the day.
I have said it before and I will keep saying it: I spend the same amount of time on the final presentation as I do on making the actual material or assembling the scanned data. So my advice would be not to rush the final part, because great lighting and composition are what will make your material shine and get noticed.
Conclusion
All in all, I would say it took me around a week to complete this project. Without a doubt, it was a challenging one due to the nature of the eucalyptus tree, which tends to shed its bark, creating complex shapes as it detaches from the trunk. I remember having a hard time capturing the bark in those places. Looking back, I would even say that I should have taken more images around those protruding areas to get a better reconstruction. In the end, I managed to clean it up in ArtEngine, but perhaps that could have been avoided.
For those who want to get into photogrammetry, I would suggest you keep trying to scan different surfaces with varying degrees of complexity. Of course, you will make mistakes, but you just need to learn from them and get better with each scan; this field is quite technical and requires a lot of patience and perseverance.
Thanks for reading this article; I hope it has been helpful. Happy scanning.