
Working with Niagara Fluids to Create Water Simulations

Asher Zhu, Senior Tech Artist at Epic Games, shared an extensive breakdown of the workflow in Unreal's Niagara Fluids, explained how to set up the water surface and whitewater particles, spoke about making the water simulation's colors scientifically accurate, and shared some code snippets.

Introduction

Greetings adventurers! I’m Asher Zhu, currently a Senior Tech Artist at Epic Games, making the video game technology that appeared in my dreams. I like blowing things up and passing you the TNT formula.

Here is one of my most recent pieces, an Elden Ring fan art VFX:

I started out in indie. Making cool games has always been my favorite game, and I’m sure a lot of you feel the same. During the dark nights, I coded, designed, and made VFX. I was proud of them, and they got me into Epic.

Since then, I've stubbornly tried to make every effect I create a mini-game of its own. Some gained popularity among gamers and developers alike. It's been an awesome experience to spread joy and inspiration this way while working for a big company.

Niagara Fluids

For starters, Niagara Fluids is UE5’s answer to fluid simulation: it includes templates for fire, smoke, pools of water, splashes, and shallow water. Niagara itself is the most robust, artist-friendly GPU programming framework for video games (or for anything, really), and Niagara Fluids is basically a DLC that makes fluid-related work much easier.

The tool's main features:

Simulation:

  1. FLIP solver for water (2D and 3D)
  2. Shallow water implementation
  3. Gas simulation for both 2D and 3D grids
  4. A series of Niagara System showcases

Rendering:

  1. SDF/jumpflood renderer
  2. Material to represent SDF using Single Layer Water
  3. (coming soon) Sphere rasterizer that supplements SDF
  4. Lighting injection interfaces for gas

Gameplay/Interaction:

  1. 3D Collision Interfaces
  2. 2.5D Collision (affect 2D sims with 3D world objects)
  3. Character interaction

And more! It’s critical to download the Content Examples project and open the Niagara_Fluids map yourself. You can wander through the workflow demos and see what’s available. All Content Examples maps are constantly updated to reflect UE’s latest features.

The Fluid Simulation Experiment

Engineered art and artistic tools have always sparked my curiosity, and I’ve done many experiments to see how far I could go. There were plenty of skill trees I had to master before I could unlock the fluid simulation skill tree.

It’s a long story, but I’d like to mention a couple of toys I made that helped me understand volumetric effects. Firstly, clouds! Stacking 3D noises together to create beautiful clouds was extremely satisfying. Painting them was just a very natural next step. The process trained me to visually feel the fluffy and crispy shapes of 3D noises. It’s very similar to water splashes, foam, and bubbles in terms of technique and mood. Here is my breakdown article if you are interested.

After that, the uncharted domain of FluidSim (for artists) caught my attention. Niagara Sim Stage was rapidly maturing at that time and I gave it a go. I read papers. I got totally lost after a couple of pages. But I kept pushing. In a couple of months, my system started to take shape. To this day I still can’t believe it worked.

Eventually, my OC character Barrelhead was born.

Around that time, I had to create most of the modules and materials from scratch: SPH solver, rasterizer, collision. There weren't good ways to communicate with Secondary Emitters (more on this below), so I had to ‘morph’ a small percentage of SPH particles into splash sprites.

With the advancement of Niagara, we now have robust pre-made modules as alternatives, with showcases to help you learn and make well-informed decisions. I think 2022 is a really good year to get into it.

If you are new to procedural VFX in general, check out the base knowledge first. From there, your second stop should be Epic’s Learning Library: search for "Niagara" and give it a go. There are dozens of extremely well-constructed examples crafted by our engine dev team, tech writers, and evangelists. A lot of questions you have, and a lot of questions you don’t know you should have, will be answered while learning to replicate these cool toys.

As for the FluidSim learning path, I personally recommend brute-forcing the Content Examples. Problem-solving is the best way to learn.

Deconstructing and reconstructing demo assets in this order has worked best for me:

  1. Check out all the examples and play with the parameters. Check out the official Niagara Fluids intro videos.
  2. Strip out the renderer ‘beauty’ components and everything not essential to the simulation, leaving a minimal ‘barely working’ system
  3. Dissect further with the assistance of Debug Tools and Attribute Spreadsheet
  4. Learn to create new custom modules to make stuff happen

Setting Up the System

Simulation

We start with the simulation part. All 3D water simulation in Niagara uses a PIC/FLIP module. As mentioned above, you can find these examples in the Niagara_Fluids map:

From a high-level overview, there isn't much to tweak here, which is good: water is water, and you don’t typically want different kinds of water.

To get a feel for the parameters, here's a brief explanation of the ones that matter:

  • Collision Velocity Mult: Used for collision interaction. For example, consider pouring water out of a bowl. If this is 0, you can’t pour the water no matter how fast you try. Water will simply flow down with gravity.
  • Geometry Collection Collisions: Works with Chaos fracture assets! Still WIP, though.
  • Static Mesh Collision: It samples individual static mesh distance fields (NOT the global distance field). You have the position, normal, and velocity of the nearest surface point at your disposal. It also doesn’t require global DF generation, so it won’t affect the Niagara System tick order, meaning it can be used with Opaque materials without a one-frame delay.
  • Num Cells Max Axis: Pick the longest axis of your bounding box and divide its length by this number to get your simulation voxel size. Just a convenient way to set and tweak the resolution for everything.
  • Particles Per Cell: Utility parameter to fill a tank of water on sim start.
  • Physics Collisions: Character collision using the Physics Asset DI; here's a great tutorial on the topic.
  • Pressure Iterations: We can bind a dynamic number to determine how many times we want a render stage to iterate. For water systems, this determines the Solve Pressure sim stage's iteration count.
  • PIC FLIP ratio: At 0.0 you get 100% PIC simulation (stable, less accurate); at 1.0, 100% FLIP simulation (accurate, less stable). A value in between mixes the good qualities of both. Usually, 0.75-0.95 works well depending on your use case (e.g., fish tank vs. running river).

Collision

Static meshes are amazing. Their collision is accurate, and we can pre-generate mesh distance fields for them. With Geometry Cache collision still in an experimental state, Static Mesh collision is the best way to stir up interesting fluid behavior.

The river in the Niagara_Fluids map uses the Static Mesh Collision DI, which is what I’d recommend. This interface takes the Mesh Distance Field (not the Global Distance Field) of all tagged static meshes. As a result, you get view-independent collision, normal, and velocity reads. The downside is that it gets heavier the more meshes you mark for collision.

An alternative is Global Distance Field collision. Because the global SDF is constantly generated as a whole at runtime, the cost is always the same. The downside is that it’s view-dependent: your water may pop a little as the camera gets nearer or further away. It also doesn’t support mesh velocity.

There are also other collision types for Landscape and Skeletal Mesh. 

Rendering

My river demo and all the Content Example 3D fluids use the Single Layer Water shading model. It’s basically a 3D box masked out to match the shape of the water body.

Because we know the water surface depth from the SDF, we can ‘push’ the SLW material pixel onto the correct position using Pixel Depth Offset. The water surface normal is also extracted from the SDF. With all of these combined, we can render the volume of the water.
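
To make that concrete, here is a minimal sketch of both steps as they might appear in a Custom material node. This is my own illustration rather than the content example's actual code; SDFSurfaceDepth, ProxyPixelDepth, SDF, S, UVW, and VoxelUVWSize are all assumed inputs:

// Hedged sketch; all input names are illustrative, not the content example's.
// 1) Push the shaded pixel from the proxy box hull onto the raymarched water surface:
float PixelDepthOffset = max(SDFSurfaceDepth - ProxyPixelDepth, 0.0f);

// 2) Extract the water surface normal from the SDF gradient (central differences):
float3 E = VoxelUVWSize;
float3 N;
N.x = SDF.SampleLevel(S, UVW + float3(E.x, 0, 0), 0).r - SDF.SampleLevel(S, UVW - float3(E.x, 0, 0), 0).r;
N.y = SDF.SampleLevel(S, UVW + float3(0, E.y, 0), 0).r - SDF.SampleLevel(S, UVW - float3(0, E.y, 0), 0).r;
N.z = SDF.SampleLevel(S, UVW + float3(0, 0, E.z), 0).r - SDF.SampleLevel(S, UVW - float3(0, 0, E.z), 0).r;
N = normalize(N);

But how do we get more art-directed elements to make the water prettier?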

Working with Whitewater

For video game water, foam or ‘whitewater’ is often a generalized name for three parts: Splash, Surface foam, and Bubbles.

Note: Artistically and technically, all of these also more or less apply to 2D water simulation, shallow water, or even traditional mesh/flowmap-based water effects. Pick what’s useful to you.

Splash

In video games, water splash is almost always presented as flipbook sprites. For 2D water surfaces, we can decide where and when to spawn splashes using a 3D geometry representation and the water velocity. Consider boulders sitting in the middle of ocean waves, or the player interacting with a river: we can tell that part of them is underwater simply by comparing their height with the water's surface height.
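
As a hedged sketch, that 2D test boils down to a height comparison; WaterHeightRT, ObjectHeight, and the other names are illustrative placeholders:

// Hedged sketch of the 2D spawn test; all names are illustrative.
float WaterHeight = WaterHeightRT.SampleLevel(LinearSampler, ObjectUV, 0).r;
bool bUnderwater = ObjectHeight < WaterHeight;
// Optional: a 0-1 submersion factor, handy for scaling splash spawn rates.
float Submersion = saturate((WaterHeight - ObjectHeight) / ObjectRadius);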

For 3D simulation, not much changes for the 3D geometry representation (of the things to collide with); however, we do need to refer to the Grid3D to find out where to spawn the sprites. That’s where the Secondary Emitter in the Niagara Fluids plugin comes in (you can find it in the examples).

The Secondary Emitter will check these conditions for all SimGrid voxel positions:

  • Distance Field - Is this point inside water?
  • Grid Velocity - Is water here moving fast enough?
  • Grid Vorticity - Is water here volatile enough?

If all three are satisfied for a voxel, secondary particles will spawn at that voxel's position (with a little jitter on top to smooth things out).
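
In pseudo-HLSL, the test could look like the sketch below; the grid names, thresholds, and jitter are illustrative placeholders, not the plugin's exact module code:

// Hedged sketch of the per-voxel spawn test; all names are illustrative.
// x, y, z: this voxel's index, provided by the simulation stage iteration.
float SDF, Speed, Vorticity;
SDFGrid.GetFloatGridValue(x, y, z, 0, SDF);
SpeedGrid.GetFloatGridValue(x, y, z, 0, Speed);
VorticityGrid.GetFloatGridValue(x, y, z, 0, Vorticity);

bool bSpawn = (SDF < 0.0f)                           // inside the water
           && (Speed > SpawnSpeedThreshold)          // moving fast enough
           && (Vorticity > SpawnVorticityThreshold); // volatile enough
if (bSpawn)
{
    // Spawn a splash sprite at the voxel center, jittered to smooth things out.
    float3 SpawnPosition = VoxelCenterWS + (RandomJitter - 0.5f) * VoxelSize;  // RandomJitter: assumed float3 in [0,1)
}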

Water surface only:

With splash sprites:

Sprites visualization:

Within the Secondary Emitter, you have fine control over how and when the sprites should spawn:

The secondary particles are rendered as flipbook sprites, and the animation starts playing the moment they spawn. I prefer dithered Masked flipbooks for splashes over Translucent ones because they are cheaper, can sort against each other, and can have pixel normals that react to environmental light. Translucent sprites can have pixel normals too if you want, but when you have a lot of secondary particles, the pixel details tend to blur each other out.

Bubbles

A Masked material writes to the depth buffer, which also means it affects the underwater light scattering of the Single Layer Water material. Here is an exaggerated example with the water darkened, so you can see the underwater sprites' color behavior more easily:

When the splash sprites are underwater, the scattering totally makes them look like bubbles. And I just used that, with a little opacity tweak. Of course, you can do fancier tricks in the pixel shader to make it look even more interesting.

Foam

Attention adventurers: for water foam, we are traveling to the “this will be in the next UE release” city, mainly because we’ll need a few additional modules. Rasterization and DualRestPosition are two of them.

For now, I’ll go over the stuff I did and provide code samples (attached at the end of this section). If you are eager, it’s a good opportunity to dive into the HLSL craziness.

For clarity, I use the SDF approach to render water surfaces because I like the look better. But all particle-carried attributes are extracted using rasterization (written as Niagara modules). See below for details.

Dual Rest Field

So, the Dual Rest Position Field (sometimes it's easier to refer to the result: advected textures) is similar to flowmaps. However, instead of predefined flow directions on a 2D texture, the direction data is carried on discrete particles and rasterized in real time:

This, of course, is better explained in cat memes:
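
On the material side, the sampling could look roughly like the hedged sketch below. The two rest-position fields (RestUV_A/RestUV_B), the CyclePeriod, and the triangle-wave crossfade are my illustration of the idea, not the shipped module:

// Hedged sketch of sampling advected textures from two rest-position fields.
// Each field is periodically reset; crossfading between them hides the reset pops.
float Phase = frac(Time / CyclePeriod);
float BlendAB = abs(2.0f * Phase - 1.0f);  // triangle wave: fade between A and B
float3 FoamA = FoamTex.Sample(FoamSampler, RestUV_A).rgb;
float3 FoamB = FoamTex.Sample(FoamSampler, RestUV_B).rgb;
float3 AdvectedFoam = lerp(FoamA, FoamB, BlendAB);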

Rasterize Foam Intensity to ScreenSpace

But how do we know where the foam is, and how do we communicate that to the material? Similar to the Secondary Emitter, we calculate a Foam Intensity on each particle and rasterize it onto a screenspace RT.

Foam Intensity RT in screenspace, visualized as a heatmap:

And we blur the Foam Intensity RT using Render Target 2D – Mip Map Generation.

Finally, we can multiply the advected foam texture (again, the cat meme texture) with the foam intensity RT.
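
In the material, that combination might look like this hedged sketch; the texture names and the mip choice are illustrative, and sampling a higher mip is what gives you the blurred intensity:

// Hedged sketch; all names are illustrative.
float Intensity = FoamIntensityRT.SampleLevel(LinearClampSampler, ScreenUV, BlurMipLevel).r;
float FoamMask = smoothstep(0.0f, 1.0f, Intensity);  // keep it smooth, see Foam Tips below
float3 FoamColor = AdvectedFoam * FoamMask;          // the advected (cat meme) texture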

That still feels too ‘dry’ for some reason, like pouring baby powder into the river. There is only so much we can do on the surface. It lacks volume. That’s where the splashes come to the rescue:

They add an essential volumetric feel on top of the surface. The dithered splash also ‘smears’ the pixels between water and foam, which yields a really nice soft feel.

Now, both the foam and the surface detail normal texture are added to the water surface using this technique. Foam drastically modifies the pixel's base color, roughness, opacity, specular, and normal, while the detail normal texture simply applies to the water surface normal (where foam is absent). More on this later.

Foam Tips

Focus on using a large advected foam texture for punch. Then add a small foam texture for detail.

If you only focus on detail, you'll get something like this. It looks nice but doesn't feel natural. The flow feels forced.

In the surface material, keep the foam intensity RT (from the simulation) smooth and untouched. Don’t tweak its contrast. A simple SmoothStep is enough. To create contrast, manipulate the advected foam textures (cat meme) instead.

I don’t have pictures for this one because it’s more of a feel. Basically, you want to art-direct the foam texture, but not the physics. The foam intensity RT is physics.

Phase Function

I kind of brushed over the detail normal layer in the last section. The reason is that once you understand the foam part, hooking up the detail normal texture is trivial. Detail normal is great for still or slowly moving water surfaces. However, for chaotic running water, your mileage may vary. Personally, when using translucency with TAA/TSR, I found it hard to keep the fine details intact.

But it’s still an amazing layer of important detail. We can use it to interact with lighting. Firstly, the detail normal twists the underlying caustics in a nice way and ‘pushes’ the caustics forward:

Secondly, you may have noticed the sparkles are a little more interesting and reactive as a result. This is because the sparkles come from the reflection of the sun, and the reflection takes the water normal, including the detail normal, as input.

Finally, another important technique is using a phase function to add fake light scattering.

The directionality input of the phase function is taken from the detail normal texture. It's an excellent way to add fidelity to volatile water, as well as a great opportunity to add some color variation. Notice the scattered light added in the right GIF is a little greener than the water color.
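
One common choice for this kind of fake scattering is the Henyey-Greenstein phase function; here's a hedged sketch with illustrative names, where CosTheta comes from the view and light directions, and the detail normal perturbs the view direction to provide that directionality input:

// Hedged sketch: Henyey-Greenstein as a stand-in phase function.
// g in (-1, 1) controls directionality; positive g favors forward scattering.
float HenyeyGreenstein(float CosTheta, float g)
{
    float g2 = g * g;
    float Denom = 1.0f + g2 - 2.0f * g * CosTheta;
    return (1.0f - g2) / (4.0f * 3.14159265f * pow(max(Denom, 1e-4f), 1.5f));
}

// Fake scattering, tinted slightly greener than the base water color:
float CosTheta = dot(normalize(ViewDir + DetailNormal * NormalInfluence), LightDir);
float3 FakeScatter = ScatterTint * HenyeyGreenstein(CosTheta, 0.6f);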

Code Examples

RasterizeParticlesAsSpheres:

After we get the depth RT, we can do the sphere trace again to rasterize ExecIndex onto another RT.

WritingExecIndexByComparingSphereDepth:

// Iterate over the sphere's footprint in grid cells around the particle.
for(int i = -RadiusIndexExtent.x; i <= RadiusIndexExtent.x; i++)
{
    for(int j = -RadiusIndexExtent.y; j <= RadiusIndexExtent.y; j++)
    {
        int2 CurIndex = ParticleIndex + int2(i, j);
        // Skip cells that fall outside the grid.
        if(CurIndex.x >= 0 && CurIndex.y >= 0 && CurIndex.x < NumCellsX && CurIndex.y < NumCellsY)
        {
            float2 VectorFromCenter = ((float2)CurIndex + float2(.5f, .5f)) / float2(NumCellsX, NumCellsY) - ParticleUV;
            float2 OffsetWS = VectorFromCenter / RadiusInUV;
            // Treating camera as orthographic, but it shouldn't be noticeable since spheres are small.
            OffsetWS *= OffsetWS;
            float t = 1 - OffsetWS.x - OffsetWS.y;
            // Inside sphere mask
            if(t > 0)
            {
                // Reconstruct this cell's depth on the sphere and compare it against
                // the depth RT written in the first rasterization pass.
                float DepthOffset = sqrt(t) * RadiusWS;
                float OriginalValue;
                float ThisDepth = ParticleClip.w - DepthOffset;
                RasterGrid.GetFloatGridValue(CurIndex.x, CurIndex.y, 0, 0, OriginalValue);
                // If the depths match within tolerance, this particle owns the cell:
                // write its ExecIndex so attributes can be fetched from it later.
                if(abs(ThisDepth - OriginalValue) < .1f)
                {
                    ExecIndexGrid.SetFloatValue(CurIndex.x, CurIndex.y, ExecIndex);
                }
            }
        }
    }
}

Yes, I’m doing the sphere tracing twice. It’s not ideal, but at the moment RasterGrid can’t carry additional attributes, so we have to do some tricks. This process does have room for improvement. Either way, to boost performance, it’s important to define a strategy to keep track of particles that are ‘too deep’ under the surface and cull them from the rasterization.

In the case of SPH sim, we have the luxury of knowing each particle’s neighbors. But for Eulerian simulations or Eulerian/Lagrangian hybrid simulations (e.g. FLIP), we don't know how many particles are nearby.

So how do we know which particles are ‘too deep’? For my demo, since SDF is the field we use to generate water surfaces, it contains the best information to make that call. Current frame SDF can be used to cull current frame particles from rasterization. Previous frame SDF can be used to cull current frame particles from SDF generation.
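
As a hedged sketch, that cull is just a per-particle SDF depth test before rasterization; the helper and threshold names are illustrative:

// Hedged sketch of the SDF-based cull; all names are illustrative.
float SDFValue = SampleWaterSDF(Particle.Position);  // assumed helper: sample the water SDF
bool bTooDeep = SDFValue < -CullDepthThreshold;      // e.g., more than a few voxel widths deep
if (bTooDeep)
{
    return;  // skip rasterizing this particle entirely
}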

From there, it’s easy to extract any attribute you want from the particles.

Grid2D_VorticityFromExecIndex:

The rest of the code examples take too much space, so I uploaded them to my Discord server. You can download the zip here.

Dual Rest Position Field:

  • DualRestTimeline: Goes into Emitter Update
  • DualRestCapturePosition: Goes into Particle Update
  • Grid2D_PixelDualRestPosition: As a Simulation Stage. This takes a ScreenSpace depth grid as input to calculate the offset between surface pixels and particle rest positions.

SDF raymarch:

Grid3D_RaymarchSDF: You can feed the rasterized sphere depth directly as input for Grid2D_PixelDualRestPosition. But it’s much nicer to have the SDF surface as input to get more accurate and smoother results.

The raymarched result can also be used directly for rendering (in material). The content example does the raymarching step inside the material.
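
For reference, a basic sphere-tracing loop over the water SDF looks roughly like the hedged sketch below, in the spirit of Grid3D_RaymarchSDF; WorldToGridUVW, SDFTexture, SurfaceEpsilon, and the other names are assumptions, not the module's actual code:

// Hedged sketch of sphere tracing the water SDF; all names are illustrative.
float RaymarchSDF(float3 RayOrigin, float3 RayDir, float MaxDistance)
{
    float T = 0.0f;
    for (int Step = 0; Step < 64; Step++)
    {
        float3 UVW = WorldToGridUVW(RayOrigin + RayDir * T);  // assumed helper
        float Dist = SDFTexture.SampleLevel(LinearClampSampler, UVW, 0).r;
        if (Dist < SurfaceEpsilon)
            return T;        // hit: distance along the ray to the water surface
        T += Dist;           // the SDF value is always a safe step size
        if (T > MaxDistance)
            break;
    }
    return -1.0f;            // miss
}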

Opacity and Colors

For colors, apart from the techniques already mentioned, it’s also important to understand the science behind UE’s water surface material, namely the Absorption and Scattering Coefficients.

In short, as of 2022, all video games still use RGB values to represent color. Spectral rendering is a luxury we don't have yet.

The benefit is that, for any light calculation, we only have three channels to worry about. In the case of water absorption, think of the primary colors Red, Green, and Blue as three types of light energy we have in the game world. When light goes underwater, the water takes some of the energy away from each primary color before the light can hit our eyes (the camera). Obviously, water absorbs red more than blue; that’s why water is blue.

But how much is each primary color absorbed? In order to answer this question, first, let’s check Wikipedia for the wavelength of each RGB channel.

So, approximately:

  • Red: ~700nm
  • Green: ~550nm
  • Blue: ~450nm

No need to be too precise here. We are artists; we can do whatever we want.

Next, let’s look up the water absorption coefficient:

This is more science than video games, but we only need to understand a small part of it.

So, for the red channel, the wavelength is around 700nm, and from the chart, we get an absorption coefficient of ~0.6/m. This means that when light goes into the water, its Red energy is reduced to 1/e of its original value after traveling a distance of 1/0.6 = 1.667m. Similarly, Green energy is reduced to 1/e after the light travels 1/0.05 = 20m, and Blue energy after 1/0.005 = 200m.
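
This is just the Beer-Lambert law applied per channel. As a hedged one-liner sketch (WaterDepth in meters; the coefficients are the chart values read off above):

// Hedged sketch of per-channel absorption (Beer-Lambert law).
float3 AbsorptionPerMeter = float3(0.6f, 0.05f, 0.005f);  // R, G, B in 1/m
float3 Transmittance = exp(-AbsorptionPerMeter * WaterDepth);
// e.g., at WaterDepth = 1.667m the red channel is down to 1/e (~37%).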

And the math ends here, well done! Now we only need to plug the absorption values into the Single Layer Water material.

There you go: if you got this far, you have nailed the most important part of making a beautiful water surface material. The water shading model will take the water depth and handle the absorption calculation.

I’d also recommend checking out Ryan Brucks’ “Water And Volumetrics - Inside Unreal” talk for a deeper dive into this topic:

For the FluidSim project, I kept the Scattering at almost zero because I put another layer of foam on top of the water and wanted high contrast between the clear water and the foam layer. The foam material was made into a Material Attribute and blended with the water surface material inside the same SLW material.

Last but not least, caustics are an essential part. They add detail, and most importantly, they give you a way to control underwater brightness without messing with the water's light response. As mentioned, the animated caustic patterns also play very nicely with the refraction from moving waves.

What Should One Consider When Working With Niagara?

Apparently, when we talk about an ‘experimental’ tool, the first thing that pops into our heads is bugs! Or that it still can’t do the oh-so-obvious thing you want it to do yet.

Personally speaking, while it’s not production-ready, it’s simply fun to jump in and learn how future video game magic will work. Professionally speaking, it’s always a ping-pong situation between technology and creative space. Bugs are artificial forms of the unknown: if you don’t fiddle with the darkness, you’ll only be able to do the stuff that everyone has already done. The process of problem-solving and banging my head against the wall has always led to a deeper understanding of what I wanted to do and how to turn that creative space into reality.

That being said, if you follow our question 3 above and the showcases, I believe you'll have a fairly smooth experience. The system is pretty robust for what we have already tested. Be aware of the cost, though: any FluidSim effect is likely to take a big chunk of your game's render budget. Profile in Standalone play mode often and plan ahead.

The good news is that all the modules are going to improve under the hood over time. So what costs you 12ms now has a good chance of costing much less in the future. But how much less? We’ll only know after the changes land and we do extensive profiling.

Regarding putting the new systems into your game: if your game is in pre-production or in its early stages with enough time budgeted for R&D, it’s always good to push for more novelty. Set up scenes to profile simulation performance under different scalability settings (particle count, grid resolution, rendering, etc.). Focus on art-directed supplemental techniques instead of relying on simulation resolution. This will go a long way.

If your game is deep in production and you are wondering whether the fancy new thing is good for the next milestone: unless you are absolutely sure about what you are doing and have an abundant margin for error, please don’t. The crunch is not worth it. So many unfamiliar and strange things can go wrong. All my friends, especially our beloved producers, will hate you and warn people about you. As a game company, we should know better than to risk crunching our teams. This might sound harsh, but I’ve seen so many horror stories.

Conclusion

Thank you for having me. I use Twitter mostly to promote my creations and techniques. It’s weird to think Twitter is widely adopted for academic purposes. Also, my ArtStation always has the best quality videos.

I do try to post everything on multiple platforms, pick your poison:

Asher Zhu, Senior Technical Artist at Epic Games

Interview conducted by Arti Burton
