Giuseppe Alfano shared how he creates procedural materials, showed the Stable Diffusion to Substance 3D Designer workflow, and explained how the materials are blended.
Introduction
Hi everyone! I am Giuseppe Alfano, a 3D artist focusing on real-time rendering and optimization for interactive applications. I currently work as a Lead 3D Artist at Centounopercento/One O One Games. I'm always looking for new technologies and innovative workflows to speed up processes and focus on the creative side of things, which, of course, is the best part.
Growing up surrounded by PCs and consoles, I developed a strong passion for video games. The technical aspects of video game development got my attention as a child, and I often asked myself, "How do they do that?" So I started my learning path at BigRock, which gave me the opportunity to better understand how CG works in general and where I wanted to specialize later on.
In 2016, I started working in the B2B industry at H-Farm Innovation, where I had the opportunity to work on a wide range of 3D projects, including virtual reality simulators and fashion-related experiences. I also had the opportunity to train fashion brand designers on how to approach 3D material creation using Substance 3D Designer and Substance 3D Sampler, starting from physical samples.
Currently, I am at Centounopercento, working on awesome projects that I am unable to discuss at the moment. If you're interested, please keep an eye on our social media for updates soon!
Substance 3D Designer – Love at first sight
When I started working in 3D, I quickly became a huge fan of the Substance suite. Back then, I needed something to rapidly generate textures for lots of assets, and Substance 3D Painter worked like a charm, but I soon realized that Substance 3D Designer was the right choice for my personal approach to procedural texturing. Being able to iterate quickly and create random variations while avoiding old-school manual texturing was my biggest goal.
I'm a big fan of procedural material creation: controlling every single element of the material lets me build a heavily customizable graph, so I can quickly react and adjust the look and feel of the material based on feedback from art direction.
Starting with material authoring can be quite scary, in my experience. My advice is to begin by studying the approach and mindset of great artists such as Joshua Lynch, Rogelio Olguin, and Cem Tezcan. The list could go on indefinitely!
Text-to-Image AI
When text-to-image AIs like Midjourney and DALL-E came out, I was fascinated and strongly attracted by the results users were posting on social media, but my reaction didn't go much beyond, "Hey... this looks awesome." When Stable Diffusion was released, I was able to try the AI myself, locally on my PC, and I started wondering, "What if I could make 3D materials out of it?"
Unfortunately, I'm not a developer at all, so I use a packaged Stable Diffusion GUI to generate the images I start from.
If you have made it this far, you may have noticed that I am somewhat obsessed with workflow optimization and time-saving automation to reduce the amount of time wasted on repetitive and tedious tasks. The ability to script an AI to generate a vast number of materials from a simple text input is something that I wanted to experiment with, and this has been the main idea behind my experiment so far.
Stable Diffusion to Substance 3D Designer
Let's dig into the fun part now! As I said, I use a packaged Stable Diffusion GUI to generate the source images I work with later on. First of all, kudos to the creators and contributors behind the NMKD Stable Diffusion GUI, which is incredibly fast and easy to use. With the latest release, it can also generate seamless images out of the box, which is incredibly useful for this process.
The process is straightforward: type your prompt into the GUI and click Generate. Magic!
In my case, I wanted to generate out-of-the-box tileable textures. At first, I struggled with the prompts because I didn't know how to properly "interact" or "chat" with the AI. After hours of experimentation and countless failures, I ended up using a standard pattern like "Top Down Photo of $Subject". So, for example, if I wanted to generate a rocky ground material, I would type in "Top Down Photo of a Rocky Ground." From there, I could start adding other information to the prompt to add detail or expand the results. For instance, I could add the words "Mossy" or "Wet," making the resulting prompt "Top Down Photo of a Mossy Wet Ground."
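As a rough sketch of how that prompt pattern could be scripted for batch generation, something like the snippet below would work; the subject and modifier lists are just placeholders, not my actual presets.

```python
# Rough sketch of the "Top Down Photo of $Subject" prompt pattern.
# The subjects and modifiers below are placeholder examples only.
import itertools

SUBJECTS = ["Rocky Ground", "Mossy Wet Ground", "Cracked Mud", "Forest Floor"]
MODIFIERS = ["", "Mossy", "Wet", "Mossy Wet"]

def build_prompt(subject: str, modifier: str = "") -> str:
    """Assemble a prompt following the 'Top Down Photo of ...' pattern."""
    detail = f"{modifier} {subject}".strip()
    return f"Top Down Photo of a {detail}"

if __name__ == "__main__":
    # Print every subject/modifier combination, ready to paste (or feed) into the GUI.
    for subject, modifier in itertools.product(SUBJECTS, MODIFIERS):
        print(build_prompt(subject, modifier))
```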
This approach made me think: what if we blended two separate materials instead of having the moss embedded in our rocky texture? With that in mind, we could generate a "Top Down Photo of a Mossy Wet Ground" and a "Top Down Photo of a Rocky Ground" and blend them later in Substance 3D Designer, achieving much more variation and control over the final material.
Challenges
The biggest challenge for me was learning to speak to the AI efficiently. Scrolling through my Twitter feed, I stumbled upon Phraser, which is a great place to start. On its inspiration page, I could see the results of user-generated images and, more importantly, the prompts used to generate them.
Substance 3D Designer Workflow
Once I'm happy with the resulting images, I jump into Substance 3D Designer to generate a 3D material from a single input. I built a simple setup that takes an image as input, processes the needed texture maps, and quickly generates a preview of the material right in Substance 3D Designer's 3D viewport.
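The real work happens inside the Designer graph, but as a rough outside-of-Designer sketch of the same idea (placeholder file names, heavily simplified math), deriving a height map and a normal map from a single image could look like this:

```python
# Very simplified approximation of "maps from a single input image".
# Inside Substance 3D Designer this is done with nodes; this numpy version
# is only a sketch of the idea, with hypothetical file names.
import numpy as np
from PIL import Image

def load_grayscale(path: str) -> np.ndarray:
    """Load the AI-generated image and convert it to a 0..1 grayscale 'height' map."""
    img = Image.open(path).convert("L")
    return np.asarray(img, dtype=np.float32) / 255.0

def height_to_normal(height: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """Build a tangent-space normal map from the height map via finite differences."""
    dy, dx = np.gradient(height)          # gradients along rows (y) and columns (x)
    normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)
    return normal * 0.5 + 0.5             # remap from [-1, 1] to [0, 1]

if __name__ == "__main__":
    height = load_grayscale("rocky_ground_sd.png")        # hypothetical file name
    normal = height_to_normal(height)
    Image.fromarray((height * 255).astype(np.uint8)).save("rocky_ground_height.png")
    Image.fromarray((normal * 255).astype(np.uint8)).save("rocky_ground_normal.png")
```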
Maps Generation and Upscaler
Usually, to speed up image generation in Stable Diffusion, the target resolution is set to 512px. This low-res image is useful for previewing the material but not ideal for map generation, so at this stage, we need to upscale it to a much more detailed one. Video2x_gui does the trick: just drag and drop the low-res image and set the desired target resolution. Upscaling the source to at least a 2K texture gives us a much more detailed input to work with.
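For the batch side of that step, a tiny sketch like the one below could loop over a folder of 512px generations. Note that Video2x is an ML upscaler and adds detail a plain resample cannot; this Pillow version only illustrates the batching and the 512px-to-2K target, with placeholder folder names.

```python
# Batch-upscale sketch (placeholder folders; a plain resample is NOT a
# substitute for an ML upscaler like Video2x, it only shows the batch step).
from pathlib import Path
from PIL import Image

SOURCE_DIR = Path("sd_output")       # hypothetical folder of 512 px generations
TARGET_DIR = Path("sd_upscaled")     # hypothetical output folder
TARGET_SIZE = (2048, 2048)           # at least 2K for map generation

TARGET_DIR.mkdir(exist_ok=True)
for path in SOURCE_DIR.glob("*.png"):
    img = Image.open(path)
    img.resize(TARGET_SIZE, Image.LANCZOS).save(TARGET_DIR / path.name)
```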
Now that I have a sharper and more detailed source to work with, I can simply swap the input in my MatGen.sbs with the high-res version, tweak a few parameters if needed, and finally export the resulting maps. The next steps are lighting and rendering!
Blending Multiple AI-Driven Materials
Blending AI-generated materials made with this technique is also really interesting. We can mix things up and randomize the final result even further just by blending two or more materials.
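To make the blending idea concrete, here is a small numpy sketch of a height-based blend, roughly what a height blend does inside Designer; the file names are placeholders, and the actual blend lives in the Substance graph.

```python
# Simplified sketch of blending two AI-driven materials with a height-based mask,
# similar in spirit to a height blend in Substance 3D Designer. File names are
# placeholders; the inputs are assumed to share the same resolution.
import numpy as np
from PIL import Image

def load_rgb(path: str) -> np.ndarray:
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def load_gray(path: str) -> np.ndarray:
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

def height_blend(color_a, height_a, color_b, height_b, contrast: float = 8.0):
    """Blend material B over material A wherever B's height wins the comparison."""
    # A soft step on the height difference gives a mask with controllable contrast.
    mask = 1.0 / (1.0 + np.exp(-contrast * (height_b - height_a)))
    return color_a * (1.0 - mask[..., None]) + color_b * mask[..., None]

if __name__ == "__main__":
    rock_color = load_rgb("rocky_ground_basecolor.png")
    rock_height = load_gray("rocky_ground_height.png")
    moss_color = load_rgb("mossy_ground_basecolor.png")
    moss_height = load_gray("mossy_ground_height.png")

    blended = height_blend(rock_color, rock_height, moss_color, moss_height)
    Image.fromarray((blended * 255).astype(np.uint8)).save("blended_basecolor.png")
```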
Render Setup
To render the final images, I use Marmoset Toolbag 4, which is incredibly fast, powerful, and easy to use.
Most of the time, the setup is fairly simple: just a bunch of spotlights, a rim light, and a custom HDRI from Poly Haven to boost the GI a little bit.
From there, we're just one step away from completion; the last mile is done in Photoshop. These are super simple but necessary tweaks to make the render pop. I'm a big fan of the Camera Raw filter, and I add some vignetting and sharpening to the final render.
Conclusion
Aside from the first days of study, which took some trial and error, the whole process is now really fast. The Auto Material graph could still be improved and fed automatically with inputs from Stable Diffusion using the Substance Automation Toolkit, which is something I want to experiment with soon.
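As a first sketch of what that automation could look like, the snippet below loops sbsrender from the Substance Automation Toolkit over a folder of upscaled images and renders the maps from a published .sbsar. The paths, the input identifier, and the exact flag spellings are assumptions to verify against the Automation Toolkit documentation and `sbsrender --help`.

```python
# Rough sketch of the automation idea: feed every upscaled Stable Diffusion image
# into the published material graph (MatGen exported as .sbsar) and render the maps
# with sbsrender. Paths, the "input_image" identifier, and flag spellings are
# assumptions to double-check against the Substance Automation Toolkit docs.
import subprocess
from pathlib import Path

SBSAR = "MatGen.sbsar"               # hypothetical published version of MatGen.sbs
IMAGES = Path("sd_upscaled")         # hypothetical folder of upscaled images
OUTPUT = Path("exported_maps")

OUTPUT.mkdir(exist_ok=True)
for image in IMAGES.glob("*.png"):
    out_dir = OUTPUT / image.stem
    out_dir.mkdir(exist_ok=True)
    subprocess.run([
        "sbsrender", "render",
        "--inputs", SBSAR,
        "--set-entry", f"input_image@{image}",   # assumed input identifier
        "--output-path", str(out_dir),
        "--output-name", "{outputNodeName}",
        "--output-format", "png",
    ], check=True)
```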
You can find me on ArtStation, LinkedIn, Instagram, and Twitter if you have any questions.
Giuseppe Alfano, 3D Artist
Interview conducted by Theodore McKenzie