Julien Rollin shared the working process behind the Black Star M+S Tire project, explained why RealityCapture was used, and talked about preparing textures for the work.
Introduction
Hello! My name is Julien Rollin. I'm an 18-year-old CG artist from France. I started 3D out of curiosity around the age of 10 and joined ArtFX this month as a first-year student. I'm a recent high school graduate, so I decided to join this school in order to turn this hobby into a real job.
I've always loved to make things. I started trying to produce electronic music when I was about 8 years old and I got my first laptop at that time. A little later, I started looking at 3D animation out of curiosity. I really liked it and started this long and exciting journey!
When I gained confidence in my skills, I started to freelance during the 2020 pandemic making visuals for websites. Then I joined a neo-bank to do some 3D content for their social media for a year. Now I'm a CG artist at Scenario, a 3D scanning mobile app.
Photogrammetry
Last year, around August, I started photogrammetry with a mobile app called Polycam, just after its photo mode had been released. It was revolutionary to me; the idea that you can get a 3D model from photos was just stunning! I knew nothing about it before, it was completely new to me. In a matter of minutes, you can make a textured model that would otherwise take you days. Converting the real world into 3D was something I had never really considered; I had always done very cartoony, stylized renders.
I played around with this app a lot with larger and larger subjects, but I found myself limited by the number of photos I could take. The resolution of the textures and meshes was also not enough for me. I started looking at RealityCapture and the possibilities it offers, and I realized that I could do even more with it!
The Black Star M+S Tire project
To capture the initial data, I split my photoset into two parts. I wanted to do a full 360° model, which I had never done before, so I shot one set of the top part of the tire and another of the bottom part. Since I don't have a studio setup with a black background (scanning in a void, basically), a DSLR camera, and a ring light, I put together a DIY setup with what I had. I used two trestles to support the tire to make sure I didn't have any shadows on the lower part. Instead of a DSLR camera, I used an iPhone 12 Pro. Its camera is 12 MP, which is pretty low, so I had to take a lot more pictures to get crisp textures and maximum detail for close-up renders. With a DSLR camera, 200 photos or even fewer would be enough, but with a smartphone, always consider taking more.
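The number of photos needed for full coverage can be roughly estimated from the camera's field of view and the overlap you want between neighboring frames. A minimal sketch of that estimate (the 65° horizontal FOV and 70% overlap below are illustrative assumptions, not values from the interview):

```python
import math

def shots_per_orbit(hfov_deg: float, overlap: float) -> int:
    """Photos needed for one full orbit around a subject, given the
    camera's horizontal field of view and the desired overlap
    between neighboring frames."""
    # Each new frame only advances by the non-overlapping fraction of the FOV.
    step = hfov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step)

# Illustrative values: ~65 degree horizontal FOV (typical phone wide lens)
# and 70% overlap between consecutive shots.
print(shots_per_orbit(65.0, 0.70))  # -> 19
```

Several rings at different heights per set, times two sets for the top and bottom of the tire, quickly adds up to the hundreds of photos a smartphone capture needs.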
To capture these photos, I waited for an overcast day to get as little direct light on the tire as I could. To remove as much lighting information as possible, which could otherwise get baked into the final texture, I used Lightroom and reduced the shadows and highlights. Also, on my phone, I exported everything in Apple ProRAW format to get DNG files, which let me adjust my photos' settings without any problem.
Once I was satisfied with my control points, I could merge them using "Merge Components" and get my complete 360° model. Some artifacts may still be present at this stage: just select the affected area of the point cloud and disable the cameras connected to it; this is also why my final reconstruction uses only 571 images. To clean the mesh, I used only ZBrush. I started by cutting away the blobs of geometry in the holes to make it cleaner, smoothed out the noise a bit in some areas, and that's it! Afterward, I made a 500k-polygon low poly version in RealityCapture and used my 5-million-poly mesh to reproject all the details through the displacement and normal maps.
For me, RealityCapture is the best choice in terms of quality and speed. It meets my needs perfectly as part of a full offline rendering workflow. I would appreciate more options for point cloud reconstruction, such as the ability to delete points, and for alignment, which doesn't offer many settings. This can be tricky when you are new to the software. Other than that, RealityCapture is by far my favorite photogrammetry software. If you start using it, everything is really well explained and detailed thanks to the tutorial integrated into the program. Its main strengths are without a doubt the speed and the quality of the mesh.
Textures
I made sure to export a high-resolution mesh before cleaning it up in ZBrush. I used this mesh to make certain I could get the best displacement map with the maximum data from my high poly mesh. During the rendering phase, I didn't even use the normal map; displacement was perfectly fine. Some people make the displacement map in ZBrush, but I prefer to stick to RealityCapture. I used the "Texture Reprojection" tool to bake my high poly details into a normal map and a displacement map.
I only did 2 UDIMs and created 16K-resolution textures. In a workflow using Maya and Redshift, this is complete overkill, since your textures must be loaded into your GPU's VRAM, and this can slow down your rendering considerably! RealityCapture offers the ability to export at such a resolution, so I gave it a try. I have a decent GPU, so it wasn't a problem for this project. For the diffuse map, I simply used the "Texture" tool without texture transfer. The roughness and metallic maps were made in Substance 3D Painter, as I don't have a stereo photogrammetry setup to capture the reflections of my subject.
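The VRAM cost of those textures is easy to estimate: uncompressed size is resolution² × channels × bytes per channel, per UDIM tile. A quick sanity check (the 8-bit RGBA format here is an assumption for illustration):

```python
def texture_vram_bytes(resolution: int, channels: int,
                       bytes_per_channel: int, udims: int) -> int:
    """Uncompressed size of a set of square texture tiles in GPU memory."""
    return resolution * resolution * channels * bytes_per_channel * udims

# Two 16K UDIM tiles, RGBA, 8 bits per channel:
size = texture_vram_bytes(16384, 4, 1, 2)
print(size / 2**30)  # -> 2.0 (GiB), before mipmaps or any compression
```

Float or 16-bit maps (common for displacement) multiply that figure again, which is why 16K textures are overkill for most GPU renders.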
The look development part is really important in order to match your reference (in my case, the original tire) with your 3D model. I had never done it before because I had never understood the purpose of it. By doing it, I realized that my roughness map created in Substance 3D Painter was too reflective, and my diffuse map was too bright. In a look dev scene, it is very important to keep everything as neutral as possible. While searching for references to create my own scene, I often noticed that people add props or extra elements. You shouldn't do that; keep it as minimal and neutral as possible! A look dev scene is not a presentation scene, these are completely different. A chrome ball, a gray ball, and a Macbeth chart are just what's needed.
Presentation
To present the final model, I wanted to make it simple but still challenging for me. For each project, I really want to learn something new and useful. I'd never done studio lighting and this is something I wanted to try. I discovered the work of Roman Tikhonov, who helped me a lot with packshot lighting references. My lighting setup is actually quite simple. I used an HDRI from HDRI Haven (Studio 1) and an area light for the inside of the tire, that's it!
Early render test
My first renders were bad: no roughness, bad framing, and random background. At first, I was happy with it, but the next day I started all over again several times. I wanted this project to be short, about a week. But it ended up being a whole month between the feedback I received and the lighting tests.
In terms of framing, I came up with everything myself, but I focused more on the close-ups since I had exported high-resolution maps. Displacement really worked well; the only limit was my diffuse map resolution.
For post-production, I did some minor tweaking with basic color correction, a little more sharpness in my render, and some bloom. I like to keep things subtle, I try not to exaggerate effects or depth of field in general. Here, I used Magic Bullet Looks in Premiere Pro, which is perfect for my needs. I normally use Nuke, but I wanted to try something else.
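Sharpening tools like the one mentioned above generally work as an unsharp mask: add back a scaled difference between the image and a blurred copy of it. A toy 1-D sketch of the idea (pure Python for illustration, not Magic Bullet's actual algorithm):

```python
def unsharp_mask(samples, amount=0.5):
    """Sharpen a 1-D signal by adding back the scaled difference
    between the original and a blurred copy of it."""
    n = len(samples)
    # 3-tap box blur with clamped edges.
    blurred = [
        (samples[max(i - 1, 0)] + samples[i] + samples[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    # Overshoot appears on both sides of an edge -- the sharpening "halo".
    return [s + amount * (s - b) for s, b in zip(samples, blurred)]

print(unsharp_mask([0, 0, 0, 10, 10, 10]))
```

Keeping `amount` small is what "keeping things subtle" means in practice: a large amount produces visible halos around edges.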
Conclusion
My first piece of advice to beginners is to scan with what you have. Just because you don't have a DSLR camera or a turntable setup doesn't mean you can't make great captures. It's more work for sure, but you can certainly do good things with a standard phone.
Moreover, when I started photogrammetry, I wanted to scan everything around me. It's pretty addictive, but I wasted so much time scanning useless stuff: subjects with bad lighting conditions, uninteresting surfaces, or no use in the end. Try to focus on one thing at first: rocks, street props, or statues are great to start with. Taking good pictures is not easy at the beginning; it comes little by little with experience. Try to focus on that, as the software is not really complex once you understand the basics. Don't hesitate to ask for feedback on Discord or elsewhere, it will help you a lot! A lot of industry professionals are on Discord, and you can get very valuable feedback there.
Photogrammetry software is like rendering engines: each has its strengths and weaknesses, but it's not the software that will get you excellent reconstructions, it's your skills and experience. Taking your time and being patient is the key.
Thank you very much for your time, it's a real pleasure to participate in an interview with 80 Level. I hope this helps you dive into photogrammetry! If you are interested in my work or have any questions, you can reach me on Instagram.
Julien Rollin, CG Artist
Interview conducted by Arti Burton