Interview with Oliver Pavicevic: VR Train & Deckard Render for Unity

Ricardo Teixeira from Amplify Creations talked to Oliver Pavicevic about his amazing VR Train scene and Deckard Render, a Unity tool developed by Oliver that helps simulate physical camera behavior akin to real-life cinematography.

Introduction

“Fake, it has to be a render!”

That was the general reaction of a considerable number of Unity 3D users when they first saw the video, but what is “fake”? What’s “fake” when you’re exploring a new medium? Sometimes you simply have to try something different, and that’s exactly what we see here. The VR-aided approach may not seem new to some users, but ask yourself: when did you last have the chance to actually be the cameraman and director inside a VR world, and to output that experience into a high-quality render?

Any developer will tell you that things “look different” in VR; it’s really something you must try for yourself. VR has the potential to be an unlimited spatial canvas where everything is permitted, and where new concepts can emerge if you immerse yourself and experiment.

Oliver Pavicevic recently got quite a lot of attention with his VR Train scene: a small yet extremely realistic-looking scene that left most people wondering if it was real-time. Little did they know that the answer was actually “no”, but also “yes, sort of”… Let’s dig into it!

At the core of this VR Train scene is Deckard Render, a rendering solution for Unity 3D that aims to simulate physical camera behavior akin to real-life cinematography, in regular real-time projects, for high-quality video production. This is not an offline renderer such as Octane, nor is it a common frame renderer. It’s a novel solution that offers users “real” high-quality soft shadows, temporal and spatial anti-aliasing for perfect filmic motion, quality bloom without shimmering, physical depth of field, and filmic motion blur.

In practice, this means that you, as a Unity user, will be able to take advantage of Deckard without making any considerable changes to your project and workflow. It works with the Post Processing Stack and existing lighting setups; there’s no need for specific offline rendering configurations or the rendering times usually associated with them. Despite having to process each frame, Deckard is quite fast when it comes to rendering, as there are practically no additional lighting calculations at render time: it’s all based on your existing real-time content.
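
Deckard’s internals aren’t detailed in this article, but the features described above (physical depth of field, filmic motion blur, temporal/spatial anti-aliasing) are classically illustrated by averaging many jittered sub-frames into each output frame. The minimal Unity C# sketch below shows that general idea for depth of field only; every name in it is hypothetical, and it is not Deckard’s actual code.

```csharp
using UnityEngine;

// Conceptual sketch only (hypothetical names, not Deckard's actual code):
// accumulation-style rendering averages many jittered sub-frames into one
// output frame. Jittering the camera inside a virtual lens aperture while
// keeping it aimed at the focus plane produces physically plausible depth
// of field; advancing scene time slightly between sub-frames (not shown)
// would add motion blur, and sub-pixel jitter would add anti-aliasing.
public class AccumulationCaptureSketch : MonoBehaviour
{
    public Camera captureCamera;
    public int subFrames = 32;            // samples averaged per output frame
    public float apertureRadius = 0.05f;  // virtual lens radius in world units
    public float focusDistance = 5f;      // distance to the in-focus plane

    public Texture2D CaptureFrame(int width, int height)
    {
        var accum = new Color[width * height];
        var rt = new RenderTexture(width, height, 24);
        var readback = new Texture2D(width, height, TextureFormat.RGBAFloat, false);

        Vector3 basePos = captureCamera.transform.position;
        Quaternion baseRot = captureCamera.transform.rotation;
        Vector3 focusPoint = basePos + captureCamera.transform.forward * focusDistance;

        for (int s = 0; s < subFrames; s++)
        {
            // Offset the camera inside the lens aperture and re-aim it at the focus point.
            Vector2 lens = Random.insideUnitCircle * apertureRadius;
            captureCamera.transform.position = basePos + baseRot * new Vector3(lens.x, lens.y, 0f);
            captureCamera.transform.LookAt(focusPoint);

            // Render the sub-frame and add it to the running average.
            captureCamera.targetTexture = rt;
            captureCamera.Render();
            RenderTexture.active = rt;
            readback.ReadPixels(new Rect(0, 0, width, height), 0, 0);
            Color[] px = readback.GetPixels();
            for (int i = 0; i < px.Length; i++) accum[i] += px[i] / subFrames;
        }

        // Restore camera state and write the averaged result into a texture.
        captureCamera.targetTexture = null;
        RenderTexture.active = null;
        captureCamera.transform.position = basePos;
        captureCamera.transform.rotation = baseRot;
        rt.Release();

        readback.SetPixels(accum);
        readback.Apply();
        return readback;
    }
}
```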

Full disclosure: I’ve actually had a chance to briefly meet Oliver in the past at Unite Amsterdam; Deckard Render also uses shaders made with one of the tools created by the company I work at, Amplify Shader Editor. But we’re here to talk about Deckard Render, which is actually compatible with any Unity shader.

I’ve been meaning to pick his brain for a while now, so this felt like the perfect time to sit down for a short talk and discuss what we have to look forward to. The transcript of the interview has been edited for clarity and length.

Career, Current Projects & Products at Unity Asset Store

Ricardo: You have quite the background, Oliver; I found your bio extremely interesting! Not only did you have a chance to experiment with both visual arts and music, but you were also a VR pioneer way back in the ’90s. Not to mention your impressive resume when it comes to the many entities you’ve collaborated with, as a developer and as a lecturer. Before we go into Deckard Render and the recent VR Train scene, tell us a bit about yourself in your own words. What do you currently do, and what’s your current passion and obsession?

Oliver: I’m currently dedicated to the development of my assets and to visuals for live events. At this moment, I could say that I’m a little bit into experimenting with new tech, mostly analyzing the new possibilities offered by the latest generation of hardware devices, and updating/refreshing some old projects and experiments.

Ricardo: You have a couple of products at the Asset Store. What brought you over to Unity, and how has that experience been?

Oliver: I didn’t start as a programmer, and my educational background is a little bit different. I come from the communication and visual design world, and I started programming back in the 2000s using the Virtools environment. Back then, it was the state-of-the-art technology for making VR and real-time graphics; it used visual scripting and was pretty easy for artists to use. An incredible piece of software. At the time, we formed a partnership with Virtools, and I worked mostly on developing applications and doing lecturing work. Then Virtools slowly died back in 2014, and it was time to move to another platform. Unity was the best solution for me because of its pretty easy workflow for complex projects and, most of all, really good support for VR hardware devices.

Ricardo: Tell me more about your current products and what they can do for users.

Oliver: The first product I made was VR Panorama Pro, a tool that captures 360 videos directly in Unity. I started working on this project for my own needs - I just needed a system that was able to capture stereoscopic 360 videos directly in Unity. Back then, there was almost no solution for authoring 360 videos, so when I showcased my results, many users were interested in using it. So what was initially intended as a personal tool soon became one of the most used tools for 360 video production.

The second tool I developed was CScape City System. This one was also developed for my own needs. As I work a lot in the live events field, I was always in need of good cityscapes as a background for real-time projection graphics. So I needed a way to make something that is optimized and can work in real-time, realistic enough while not using too many resources. Big cityscapes are pretty hard to simulate, as they often use many textures, which means heavy VRAM usage and many draw calls. So I made CScape, which is able to reduce memory usage and draw calls to a minimum by using procedural shaders and compression techniques.

And lastly, I made Deckard Render.

Ricardo: Do you consider yourself more of an artist, programmer, or a jack-of-all-trades in VR development?

Oliver: I’m mostly a technical artist, and probably a jack-of-all-trades. The latter is seen, by today’s standards and from a marketing point of view, as a sign of weakness. But I really enjoy trying to understand and experience all aspects of this job.

Development of Deckard Render

Ricardo: Based on your product name and demos, I’m going to go out on a limb and guess that you might be a Blade Runner fan. How did Deckard Render come about, was it out of necessity or was this something you’ve been planning for a while now?

Oliver: Well, I started doing my job as a fan of sci-fi... As a kid, I was really into Star Wars, Alien, and Blade Runner. Making models and working robots was my thing. (laughs) My passion for movies and special effects also came from there.

Deckard Render, like all of my assets, was also made mostly out of personal need. I always like to challenge myself, and one year ago I decided to make a few scenes from Blade Runner entirely in Unity. I wanted to match the lighting and everything that comes with that filmic look. But I ran into the problems that are almost always a problem when doing real-time graphics: aliasing issues and the overall look of motion. It was always as if the motion was wrong. So, I started working on my own custom anti-aliasing and motion blur post-processing effect. And it grew into something else, as the approach I was taking gave me the possibility to do other things on top of AA and DOF without any further impact on performance. That means I could use things such as soft area lights, stabilize unstable image effects, and do all kinds of multipass or volumetric shader effects without too much coding.

So that’s where the name Deckard comes from: a tribute to the work that inspired me to make this renderer.

Ricardo: Do films have a major influence on all your work? What genres, or specific movies, inspire you? Anything new you’re working on that we should look out for?

Oliver: I come from a movie and photography background, where I accumulated a pretty big amount of technical knowledge. I started working with photography and video back in the '90s, when we didn’t have digital technology and you had to rely mostly on practical effects. At the time, our generation didn’t have the internet, and it was pretty hard to find any information on how some things were done. So this involved a lot of guesswork and mostly trying to reinvent the wheel. Also, we didn’t have access to devices the way we do today. Today you can easily make audio, video, and special effects with a phone and a simple laptop. So my latest efforts are mostly directed at bringing back that ‘something’ from analog/chemical film that I’ve been missing.

Train VR Scene Production

Ricardo: The Train VR scene caught a lot of people by surprise, including me. Not only was it well-executed; I think your artistic sensibility and attention to detail also played a major role in capturing people's attention. It was really interesting seeing users sticking around to see what you would add next.

Oliver: I started working on it by using some old drawings and photos from when I was just a kid, then did some research on the internet and found some photographic material. This work was pretty personal, as I wanted to create a place that reminds me of my childhood. The idea was to make a place that I loved so much. I started from memories, old photos, and some of my drawings with all the details that I saw at that age.

My main problem was that there was more than one version of that train (it was a train that connected Belgrade with Moscow), and they all had some elements that were different from the one I knew. So I tried to filter things out based on my drawings. Of course, I also found some technical plans and train miniatures that were really useful, as they helped me get the real proportions and measurements.

But actually, my 3D train model isn’t as detailed as it might seem. It is a pretty simple scene in terms of polygon and texture count; what makes it look realistic is probably the care I took in lighting the scene. There’s a lot happening in the lighting and in how materials behave in terms of reflection, refraction, and shadowing. I rely mostly on custom shaders for those needs, and many things are actually done procedurally in a shader or via scripting. For example, even though we can simulate dynamic cloth for curtains, processing forty or more curtains would require too much processing power. So I’m using custom shaders that are tuned to do vertex displacement driven by some global variables. This means that I can drive curtain movement by animating parameters or by just feeding in audio of a moving train; if the train bumps, the curtain shaders can make the curtains follow this movement based on audio peaks.

I’m pretty obsessed with hardware optimizations, as I always feel the need to be able to add more objects, more lights, more effects. I wanted my scene to also work in VR, and not just on a screen. Actually, the primary target was a screen, and I only decided afterward to render it out with Deckard.
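
The curtain trick Oliver describes, vertex displacement in a shader driven by a global variable fed from audio peaks, can be sketched on the scripting side roughly as below. This is a hypothetical illustration only (the property name _TrainBump and the class are invented, not Oliver's actual code); the matching curtain shader would read that global and scale its vertex displacement by it.

```csharp
using UnityEngine;

// Hypothetical sketch of an audio-driven curtain setup (names like _TrainBump
// are invented here, not Oliver's actual code). The script samples the train
// audio each frame and writes its peak amplitude to a global shader property;
// a curtain vertex shader reads that global and scales its displacement by it,
// so bumps in the sound make the curtains sway.
[RequireComponent(typeof(AudioSource))]
public class CurtainAudioDriver : MonoBehaviour
{
    const int SampleCount = 256;
    readonly float[] samples = new float[SampleCount];
    AudioSource trainAudio;

    void Awake()
    {
        trainAudio = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Grab the most recent block of output samples from the train audio.
        trainAudio.GetOutputData(samples, 0);

        // Use the peak amplitude as a simple "bump strength" signal.
        float peak = 0f;
        for (int i = 0; i < SampleCount; i++)
            peak = Mathf.Max(peak, Mathf.Abs(samples[i]));

        // Every curtain material can read this global in its vertex shader.
        Shader.SetGlobalFloat("_TrainBump", peak);
    }
}
```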

Ricardo: You’re mixing a few practical applications here, can you elaborate on what is rendered and what takes advantage of real-time input?

Oliver: Well, the final animation wasn’t rendered in real-time. The ratio of real-time to rendering time is about 1:30. This means that twenty seconds of real-time actually requires about ten minutes of rendering to video. But the tech and workflow behind it are real-time. So this means that I can use all my shaders and image effects, and improve on them by adding cinematography and natural motion using Deckard.

Ricardo: It seems that you’re combining existing real-time rendering techniques and VR hardware in a way that somewhat democratizes virtual filmmaking without overly lengthy rendering processes or expensive hardware requirements; something usually seen in costly pipelines until now. Was this one of the objectives from the get-go or a natural progression of the Deckard Renderer development process?

Oliver: Absolutely! You are right. When you work in production, you need to find a way to optimize your workflow. One of Murphy’s laws says that “if you want to find the best way to optimize a task, give that task to a lazy guy and he will find the most productive way to do it”. Of course, I’m not a lazy guy, I’m just hyperactive, and I really always have a need to finish things fast and move on to other tasks. And all of this using a laptop instead of render farms. Then, there is always the production value of being fast. One of the most important things in the production business that I always try to teach my students is: you will fail if you strive for perfection and forget your deadlines. Business value is all about getting the best possible result in the least amount of time; it’s a sort of balance between quality and production time. So yes, I made Deckard also for that reason, as I couldn’t get the same production time vs. quality with other tools.

Ricardo: Is there anywhere else we should look for additional details? 

Oliver: I have started making a series of videos on YouTube that should go into some detail about what I did and what a workflow for this or similar types of projects could look like. This is the first video of the series:

More About Deckard Render & How It Works

Ricardo: Do you get asked a lot about the difference between your renderer and something like Octane? It’s not a fair comparison by any means; it’s like comparing apples to oranges, but it seems somewhat common. Rendering speed comes to mind as one of the main advantages, but what do you think really sets it apart?

Oliver: Well, in production, rendering speed is one of the most important things. I come from a live events background, where production is always extremely fast-paced; sometimes minutes can make a difference. The TV world is similar. You need tools and hardware that can render fast. In most cases, Deckard renders a 4K frame in 1-3 seconds. It is also noiseless. Also, it can be used with any Unity shader, and those shaders can be extended for use with Deckard while keeping compatibility with standard Unity rendering. It builds upon Unity and enhances it. For example, you can avoid most of the problems related to transparent surfaces or custom shaders. It requires knowing the basics of Unity, as it doesn’t feature any GI of its own, but it works amazingly well with Unity’s built-in real-time GI, which doesn’t require too much time for baking lighting solutions.

One of the properties of Deckard is that it can be used to extend custom shaders while keeping them compatible with the real-time experience, without any rendering overhead. With Deckard, it’s pretty easy to add some multi-layering effects or simulate volumetric effects like multipass fur without writing complex multipass shaders. Actually, all of the multipass shaders can be expressed in a shader by using a few conditional switches.

This means, for example, that you can add support for motion blur on scrolling shaders. But one of the most important features is the way it treats alpha transparency, eliminating the standard problems with transparent surfaces: aliasing issues when dealing with cutout shaders, or screen-space reflection, DOF, motion blur and, last but not least, Z-ordering issues when using standard real-time graphics. One example is that with my renderer you can use SSR or AO on transparent surfaces, something that can’t be done in real-time graphics.

But there are some flaws that require a bit of fiddling around. One of the things that I am currently not handling, and don’t even intend to, is the output of various rendering channels into textures. Most renderers can export normal maps, object and material IDs, depth, vector fields, and material properties into separate image sequences. This isn’t of any interest to me, as most of those properties are based on logic that is the opposite of Deckard’s. For example, you would want a depth channel to be able to do some DOF in a compositing application. But I don’t even want to think about that, because I’ve built my system to avoid that process. Image post-processing in compositing software still can’t get close to the quality of the DOF and motion blur simulations that Deckard can output. So, Deckard should be seen more as a final pass for video production; or more like an output from a camera that then goes into color correction.

Ricardo: How does Deckard translate into production, given its rendering speed and the minimal workflow changes required? I would imagine that makes it a good choice for TV and high-quality pre-viz.

Oliver: I’m using it for TV production. I have also included some really good chroma key effects, and I’m researching some more TV-centric things, like tracking while synchronizing animations to LTC timecode. This would make it possible to match camera moves with a 3D scene while using VR devices such as the Oculus or Vive.

Ricardo: I personally think color grading is one of the most important tools we have at our disposal to take footage and real-time applications to a whole new level, and it’s crucial to achieving that filmic look and feel. What other techniques does Deckard apply to achieve that look?

Oliver: I have to agree, color correction and lighting are key to a realistic look. And, in my opinion, many creators don’t give this aspect enough weight.

I have sampled light responses from professional cameras like Arri, Blackmagic, and Canon, as well as from the most commonly used film emulsions. And I can assure you that there are important differences in how sensors (or film) treat the image, and in how much realism (or what we associate with “realism”) they add. Also, the new version of Deckard is able to simulate the noise that comes from different cameras. By using these techniques, it outputs footage that is easily interpreted by artists who are used to working on video color correction.

Ricardo: Looking forward, I can definitely see your renderer’s potential for film, TV, and scanned content visualization. What can we expect from Deckard Render in the future?

Oliver: I’m mostly working on some optimizations and a more interactive workflow. I want to include some more specific shaders and examples that can be used with Deckard to improve its quality.

Ricardo: Can we expect additional VR-related controllers such as “virtual cameras”, or additional tracker use for virtual cinematography, or will it focus solely on the rendering side of things?

Oliver: Yes, I’m currently working on some of those simulations. The main idea is that you take your virtual camera, with a headset on, and you record all the movement while being in VR. This was the approach that I used for filming my Train scene, but at that moment, it was a pretty complicated setup. I’m working on making it simple and easy to use. Sometimes, it’s easy to make a tool that does a job, but it’s much harder to make it work in a simple and understandable way.

Future Plans

Ricardo: Any new products on the horizon?

Oliver: As I’m a one-man development ‘team’, I find myself in a situation where I have limited resources for support, marketing, and development, and most of all, I want to be sure that I can give all the necessary support to my customers. This means that I don’t want to put out products whose user interfaces aren’t simple enough to be self-explanatory. I would really like to come back to my NordLake water system project (which I’ve made available for free in the meantime), but I think I will wait until Unity finishes its implementation of the HDRP (High Definition Render Pipeline).

Read more about the water system here:

Ricardo: Do you work on your Unity products exclusively or do you also provide development services? If so, where can companies reach you?

Oliver: Yes, I mostly work as a freelancer. Companies can always contact me via my site.

Experience with Looking Glass

Ricardo: One final question: Having personally experimented with it, what do you think of the Looking Glass display and its future implications? Are the Deckard-exported lightfield renders specific to that display or can they be used with other players?

Oliver: Looking Glass is really fun to use. I must admit that the last time I had so much fun was when I got my first Oculus DK1. The tech is still pretty primitive, but it’s a good proof of concept, and I can see its future potential. Implementing lightfield support for Deckard was pretty straightforward, as the tech that Deckard uses is conceptually quite similar to lightfield rendering. As for using lightfields without a lightfield display, they can be used in many ways in classic games or in VR applications. Due to their autostereoscopic nature, they can be used in 2D scrolling games, or in VR as a replacement for 3D sprites. Right now, we mostly use 3D billboards when dealing with grass and plants. Lightfield textures, on the other hand, can help by adding volumetric information at a pretty low performance cost. They can also be used to replace complex background 3D models in VR or in scenes with a variable POV.

Closing Note

For additional details, be sure to visit Oliver’s Blog and his Unity Asset Store page:

Oliver Pavicevic, Graphics Programmer

Interview conducted by Ricardo Teixeira
