VFX Artist Aquib Hussain has shared a detailed breakdown of his Jibaro-inspired animation made in Houdini, explaining how the water was set up and how the dancing animation was created, and discussing rendering in Arnold.
Introduction
My name is Aquib Hussain, and I am 23 years old. I pursued a Bachelor of Science degree in Animation and VFX at the AAFT School of Animation. I am currently working as an FX Technical Director at DNEG (Double Negative) at one of its branches in India. DNEG is my first full-time job, and it has let me work on some of the coolest and most exciting projects, such as Stranger Things: Season 4, Devotion, and many others.
Getting Acquainted With Houdini
I always had the set goal of becoming an FXTD. I used to work with other DCC packages, where certain tasks took a lot of time. They weren't bad and each had its own strengths, but for me, some things became very repetitive.
So, being a very lazy person, I started researching ways to automate certain tasks, which led me to discover this incredibly awesome program – Houdini – and the word affiliated with it: proceduralism. Something in my instinct told me this was going to save a lot of effort down the line, so I didn't wait long and started going through pretty much all the resources I could find on getting started with Houdini, and I found my way around it. It turns out there's a lot more we can learn on the internet than we anticipate. So, in a way, my own laziness motivated me to learn Houdini.
The Jibaro-Inspired Project
I started binging the Netflix show Love, Death + Robots, and I was blown away, especially by the Jibaro episode, which showcased some of the best animation and FX work I have ever seen. I immediately realized that a lot of fan art and cosplays were definitely on their way. Just like most artists, I was inspired. But inspiration wasn't enough – I still needed to figure out the best way to get it done within a time frame that wouldn't disturb my work and sleep schedule.
So, I set a goal to make something that somewhat resembles the original and is fairly optimized. The first thing I did was set up a previz animation and a camera in Maya, which I locked at this very stage.
The Water and the Terrain
Before we set up the water simulation, there's one thing that's very important to prepare first, i.e., the environment itself. The FLIP solver is very good at simulating water behavior, so as long as the prerequisites are right, we can pretty much just run the FLIP sim and get pretty good results.
To elaborate further, I'll use the terrain I created for my project.
I started with a grid and sculpted areas according to how I wanted the water to move. If I want the flow in an area to be faster, I just decrease the depth, and if I want an area with slower movement, I increase the depth. It's that simple! This way, we can literally art-direct the flow of the water. The splashes and interactions were likewise set up through basic collision VDBs.
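As a side note on why depth controls speed: in a steady flow, the same volume of water has to pass through every cross-section of the channel, so shallower sections must flow faster. A tiny Python sketch of that continuity intuition (the numbers are purely illustrative, not taken from the project):

```python
# Continuity intuition behind depth-based flow control (illustrative only):
# for a steady channel flow, flux ~ width * depth * speed, so for a fixed
# flux, shallower water has to move faster.

def flow_speed(flux, width, depth):
    """Mean flow speed implied by conservation of flux in a channel."""
    return flux / (width * depth)

shallow = flow_speed(flux=2.0, width=1.0, depth=0.25)  # shallow area
deep = flow_speed(flux=2.0, width=1.0, depth=1.0)      # deep area
# shallow (8.0) > deep (2.0): lowering the depth speeds the flow up
```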
Now to the meshing technique! The first question that arises is: why would we not use the traditional method of water meshing, where we just convert the FLIP particles into VDBs at a very fine resolution? I am in no case saying we should not use the usual method in production, but sometimes we don't have enough memory or computational power in our own PCs. In that case, I hope this method can save you a lot of time.
Water Setup:
Above is the FLIP simulation I get just from setting the terrain up right. I deleted the particles outside the camera frustum. It's all a basic FLIP sim.
After the FLIP simulation, we have to split it into two parts:
- The first part isolates the FLIP simulation to only the height we want.
- The second part is all the particles that have a higher velocity and are quite far apart from each other.
In the above GIF, we can see the separation between the base flow layer (green) and the splashes (red). Basically, we are separating the splashes from the water flow. Once we have our base water flow separated, we can lay out a grid with enough resolution and size it to the bounds of our FLIP simulation.
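Outside Houdini, the split can be sketched in plain Python: a height threshold keeps the base layer, while speed plus neighbor sparsity picks out the splash. The thresholds and the brute-force neighbor check are illustrative stand-ins for Houdini's point-cloud tools, not the author's exact setup:

```python
# Hypothetical two-way particle split: base flow vs. splash.
# points: list of (position, velocity) tuples.

def split_particles(points, max_height, speed_threshold, neighbor_radius):
    base, splash = [], []
    for i, (pos, vel) in enumerate(points):
        speed = sum(c * c for c in vel) ** 0.5
        # brute-force neighbor count; Houdini would use pcfind()/nearpoints()
        neighbors = sum(
            1
            for j, (q, _) in enumerate(points)
            if j != i
            and sum((a - b) ** 2 for a, b in zip(pos, q)) ** 0.5 < neighbor_radius
        )
        if speed > speed_threshold and neighbors == 0:
            splash.append(i)   # fast and isolated: splash layer
        elif pos[1] <= max_height:
            base.append(i)     # low and calm: base flow layer
    return base, splash

points = [
    ((0.0, 0.1, 0.0), (1.0, 0.0, 0.0)),  # slow, low: base
    ((0.1, 0.2, 0.0), (1.2, 0.0, 0.0)),  # slow, low: base
    ((5.0, 1.5, 0.0), (6.0, 4.0, 0.0)),  # fast, isolated: splash
]
base, splash = split_particles(
    points, max_height=0.5, speed_threshold=3.0, neighbor_radius=0.5
)
# base -> [0, 1], splash -> [2]
```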
Once we have that, we can transfer attributes such as vorticity and velocity onto the grid. With this information on the grid, we can use it to deform the geometry. Now, I don't want to bore all readers with a math explanation, so here is the VEX code I used to deform the grid:
As for the splashes, we can just go ahead and process them with the traditional VDB conversion method. Later, we can merge and blend the splashes and the deformed grid by using a light position transfer.
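That position transfer can be pictured like this in plain Python, with a brute-force nearest lookup and a linear falloff standing in for the Attribute Transfer SOP (radius and strength are illustrative):

```python
# Hypothetical "light position transfer": grid points near a splash point
# are pulled part of the way toward it, with a smooth falloff.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def blend_positions(grid_pts, splash_pts, radius, strength=0.5):
    out = []
    for p in grid_pts:
        # nearest splash point (brute force; Houdini does this spatially)
        nearest = min(splash_pts, key=lambda s: dist(p, s))
        d = dist(p, nearest)
        if d < radius:
            w = strength * (1.0 - d / radius)  # linear falloff with distance
            p = tuple(a + w * (b - a) for a, b in zip(p, nearest))
        out.append(p)
    return out

blended = blend_positions(
    [(0.0, 0.0, 0.0)], [(0.0, 1.0, 0.0)], radius=2.0, strength=0.5
)
# the grid point moves 25% of the way toward the splash point: (0.0, 0.25, 0.0)
```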
Here is a screenshot of both the splash mesh (red) and the deformed grid (green) merged.
We can now smooth the final mesh and cache it out. This gives us a mesh with the deformed grid as the base flow and a few separate meshes as the splashes, and it saves a lot of time when processing the material for rendering in Arnold later.
Setting Up the Leaves
The meshing process we discussed in the previous chapter actually makes setting up the leaves a bit easier. The challenge here is to set up a scattering system that doesn't kill the hardware before and during the simulation.
So, to set it up, we first need some primitives with a leaf shader. I found some online, but we can also prepare them however we want. Just make sure they don't overkill the mesh; I'd recommend a 2D grid with a maximum of 8 divisions. Once we have that, we have to set up the scattering so that:
- On the first frame, leaves are emitted all over the deformed water grid, since we want the first frame to be fully covered.
- On subsequent frames, we scatter random meshes outside the camera frustum to keep leaves being generated and flowing into view once we set up the Vellum solver later.
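The two emission rules above can be sketched as a simple frame-dependent scatter. This is a stand-in for Scatter SOPs gated by the frame number; the point counts and the upstream band are made up for the example:

```python
import random

def scatter_leaves(frame, surface_pts, upstream_pts, per_frame=2, seed=7):
    """Frame 1: cover the whole water surface so the shot starts populated.
    Later frames: emit a few leaves upstream, outside the camera frustum,
    so new leaves keep drifting into view once Vellum advects them."""
    if frame == 1:
        return list(surface_pts)
    rng = random.Random(seed + frame)  # deterministic per frame
    return rng.sample(upstream_pts, per_frame)

surface = [(x * 0.5, 0.0, z * 0.5) for x in range(4) for z in range(4)]
upstream = [(-5.0, 0.0, z * 0.5) for z in range(8)]
first = scatter_leaves(1, surface, upstream)  # 16 points: full coverage
later = scatter_leaves(2, surface, upstream)  # 2 new upstream points
```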
Once we have the scattering, we can let our trusty and handy Vellum take over. The trick here is to have the geometry stick to the surface of our deformed water grid. To do so, a velocity volume is prepared from the FLIP simulation and used as an advection force in the Vellum solver.
And to make the geometry stick to the deformed water grid, here's the VEX code used in a geometry wrangle, with the second input being the water grid we cached out earlier:
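That wrangle is also shown as an image in the original article. Its effect can be approximated outside Houdini as "pull each leaf point toward the closest point of the cached water mesh", much like blending @P toward minpos(1, @P) in VEX. The blend weight here is a hypothetical stand-in:

```python
def stick_to_surface(pt, water_pts, stickiness=0.8):
    """Pull a simulated point toward the closest point on the cached water
    grid. `stickiness` (0..1) is an illustrative blend weight, not the
    author's setting."""
    closest = min(
        water_pts,
        key=lambda w: sum((a - b) ** 2 for a, b in zip(pt, w)),
    )
    return tuple(a + stickiness * (b - a) for a, b in zip(pt, closest))

water = [(0.0, 1.0, 0.0), (1.0, 1.2, 0.0)]
leaf = stick_to_surface((0.1, 2.0, 0.0), water, stickiness=0.8)
# the leaf point is pulled 80% of the way toward (0.0, 1.0, 0.0)
```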
Recreating the Golden Woman's Dance
First of all, I am not in any way or form an animator. In fact, I am terrible at it. But my many thanks to the incredible team at Adobe for providing us with the vast library of mocap and characters at Mixamo. I downloaded different varieties of dance animations and processed them in Houdini using KineFX, which I myself had never used before, but surprisingly, it is super easy to get started with and very accessible. With a little bit of exploration, trial, and error, I was able to combine some mocap data and, just like editing a video, compile an original dance.
As for the jewelry, the modeling was all done by painstakingly placing individual curves using Vellum drape, mostly manually. This step saved me from having much to do later with the entire jewelry simulation. The jewelry also had an additional advection setup, similar to the leaves, to make it react to the water flow.
The real meshing was actually done post-simulation, as Vellum gives us some interesting attributes such as pscale, which I used to instance some spheres to get the look while also not worrying about intersections.
But here's a fun fact! The small parts of the jewelry are not even simulated! To keep the longer jewelry from being intertwined too much with the smaller pieces, I set up a procedural system to deform the smaller parts, and here's how I did it! To demonstrate it, I'll use a head with individual strands.
1. Use a point deform setup to attach the strand to the dancing body.
2. Calculate the velocity of the deformed strand.
3. Calculate a CurveU attribute running from the tip to the end, with the tip being 0 and the end being 1.
4. Now we can add to the position of each strand point the inverse of the velocity calculated from the body movement, multiplied by the CurveU value. This creates a gradient of displacement from start to finish, making the strand look like it's reacting to the motion.
5. Ray the results back onto the body using a closest-point ray, then lift the strands slightly along the normals to keep them from intersecting.
And there you go! You can use this algorithm on all the strands in your system to emulate the movement of small jewelry.
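The five steps above can be condensed into a small Python sketch for one strand, with step 5's ray reduced to a simple lift along the normal; `strength` and `lift` are illustrative tuning values, not production settings:

```python
# Per-point strand deformation: displacement opposes the body velocity,
# scaled by CurveU so it fades along the strand (tip = 0 gets none,
# end = 1 gets the full lag), then every point is lifted along the normal.

def deform_strand(points, curveu, body_vel, normal, strength=0.1, lift=0.01):
    out = []
    for p, u in zip(points, curveu):
        # step 4: inverse of the body velocity, graded by CurveU
        p = tuple(c - strength * v * u for c, v in zip(p, body_vel))
        # step 5 (simplified): lift slightly along the surface normal
        p = tuple(c + lift * n for c, n in zip(p, normal))
        out.append(p)
    return out

strand = [(0.0, 0.0, 0.0), (0.0, -0.5, 0.0), (0.0, -1.0, 0.0)]
curveu = [0.0, 0.5, 1.0]  # tip = 0, end = 1
new = deform_strand(strand, curveu, body_vel=(1.0, 0.0, 0.0), normal=(0.0, 0.0, 1.0))
# the u = 0 point only gets the lift; the u = 1 point lags the motion most
```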
Rendering and Lighting
Once I had all the elements in Houdini, it was very important to clean up all the geometry before caching everything out to Alembics. By cleanup, I mean deleting every attribute I don't need and making sure all the attributes are in the correct context.
Because I knew I would be rendering in Arnold, I made sure to promote all the attributes I needed to the vertex context. Please note that the attributes need to be in RGB format (for example, V.r, V.g, V.b), not any other kind of array; only then will the utility data work in Arnold. And Maya always reads attributes from vertices.
Once all the clean-up was done, I simply cached everything out as Alembics, imported everything into a new Maya scene, and set up the shaders using the aiUserDataColor utility. I won't go in-depth on how to use aiUserDataColor, as there are a lot of good tutorials online.
For motion blur, I also baked the velocity data in Houdini into the Cd attribute, as motion blur from Alembics is not possible when the mesh topology is inconsistent. We can simply replace the motion vector source for any mesh as shown in the picture below; just make sure that for every mesh, in the export settings in the Attribute Editor, Export Vertex Colors is enabled.
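As an illustration of the baking step, per-point velocity can be remapped into the 0–1 color range and recovered on the render side. The remap convention and the maximum speed here are assumptions for the example, not the exact production settings:

```python
# Hypothetical velocity-to-color bake: map each component of v from
# [-max_speed, max_speed] into [0, 1] so it survives as vertex color,
# then invert the remap when reading it back as a motion vector.

def velocity_to_cd(vel, max_speed):
    return tuple(min(max((c / max_speed) * 0.5 + 0.5, 0.0), 1.0) for c in vel)

def cd_to_velocity(cd, max_speed):
    return tuple((c - 0.5) * 2.0 * max_speed for c in cd)

cd = velocity_to_cd((2.0, -1.0, 0.0), max_speed=4.0)  # -> (0.75, 0.375, 0.5)
back = cd_to_velocity(cd, max_speed=4.0)              # round-trips to (2.0, -1.0, 0.0)
```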
The lighting was done with a 3-point lighting setup, with the key light exposed more toward the back side, as this enhances the silhouette of the character.
Additional lighting was also added, specifically in the reflective areas, to get some shiny details, which were later enhanced in the compositing stage.
The water had a similar 3-point lighting setup (exactly the same position and orientation), but the exposures were adjusted to make it less apparent in the reflections.
For the camera animation, I just stuck with the original camera I set up during previz and rendered everything with lower-than-default settings, except for the Camera AA, which was set to 4, and motion blur, which was enabled. Everything was rendered as a single layer with one additional Cryptomatte material pass for color correction in compositing.
Conclusion
The entire project took 12 days, from setting up the animation to rendering and compositing. The main challenges were setting up a reliable system to optimize everything and not spending any money on outsourcing at any stage. Cumulatively, it all came down to planning and knowing the prerequisites.
My tip to beginners would be not to listen to any tips, always listen to yourself! Learn what interests you and you’ll excel!