Q-Net Render

What Is This?

For my dissertation, I implemented a new method for volumetric rendering. The method uses a neural network to represent the volume's density data, then uses 'Q-Nets' to integrate that network along rays to find the optical depth along each ray.

Optical Depth

In the volumetric rendering I was concerned with, there were two phenomena I wanted to account for:

  • Absorption: when a photon hits a particle in the volume and is absorbed by that particle, stopping the photon’s transport through the volume.
  • Scattering: when light hits a particle in the volume and is reflected off in a new direction. We are only concerned with two cases of scattering:
    • Out-scattering: where light travelling in the direction of interest is scattered and begins travelling in a direction that will not contribute to the luminance of interest.
    • In-scattering: where light travelling in other directions scatters into the direction of interest.

These phenomena determine the proportion of light that makes it from point a to point b. Out-scattering and absorption are indistinguishable from the viewer's perspective (it doesn't matter which occurred, just that some of the light is no longer going to reach point b).

The likelihood of a photon being absorbed or scattered depends on the total volume density the photon encounters along its path through the volume; this is the optical depth. If the density of the volume is homogeneous, then this is simply the density of the cloud times the distance the photon travels through the volume. If the volume is non-homogeneous and can't be described with an integrable function, then numerical approaches are needed to calculate it.
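To make this concrete, here is a minimal Python sketch of the two cases: the closed form for a homogeneous volume, and a numerical line integral (ray marching with the midpoint rule) for a general density function. The density values and step count are arbitrary placeholders, not values from my project.

```python
import math

def optical_depth_homogeneous(density, distance):
    # Homogeneous volume: optical depth is just density times path length.
    return density * distance

def optical_depth_numeric(density_fn, length, steps=10_000):
    # Non-homogeneous volume: approximate the line integral of density
    # along the ray with the midpoint rule (ray marching).
    dt = length / steps
    return sum(density_fn((i + 0.5) * dt) for i in range(steps)) * dt

# A constant density function recovers the homogeneous result.
tau_h = optical_depth_homogeneous(0.5, 4.0)
tau_n = optical_depth_numeric(lambda t: 0.5, 4.0)

# Transmittance (Beer-Lambert): the fraction of light that survives
# absorption and out-scattering over the path.
transmittance = math.exp(-tau_h)
```

The exponential at the end is what turns an optical depth into the proportion of light that actually makes it from point a to point b.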

My method represents the density distribution within the volume as an integrable function using a neural network, allowing the optical depth to be calculated by analytical integration.

Q-Net

Neural networks with a single hidden layer with sigmoid activations and a single linear output node have been shown to be integrable. A Q-Net is a re-ordering of the equation for this integral that allows parts of it to be evaluated as another neural network.
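The key fact is that each sigmoid has a closed-form antiderivative, the softplus: ∫ σ(wx + b) dx = softplus(wx + b) / w, so the whole network integrates term by term. The sketch below demonstrates this in Python (not the Q-Net matrix re-ordering itself, just the underlying integral); the weights are arbitrary placeholders, and the closed form is checked against a fine Riemann sum.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    # Numerically stable log(1 + e^x), the antiderivative of the sigmoid.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def net(x, w, b, v, c):
    # Single hidden layer with sigmoid activations, linear output node.
    return sum(vi * sigmoid(wi * x + bi) for wi, bi, vi in zip(w, b, v)) + c

def net_integral(lo, hi, w, b, v, c):
    # Closed-form integral of net() from lo to hi: each hidden unit
    # contributes v_i * softplus(w_i * x + b_i) / w_i.
    def antideriv(x):
        return sum(vi * softplus(wi * x + bi) / wi
                   for wi, bi, vi in zip(w, b, v)) + c * x
    return antideriv(hi) - antideriv(lo)

# Hypothetical weights; compare the closed form against a midpoint sum.
w, b, v, c = [1.5, -0.7], [0.2, 0.3], [0.8, 1.1], 0.05
exact = net_integral(0.0, 2.0, w, b, v, c)
steps = 20_000
dt = 2.0 / steps
approx = sum(net((i + 0.5) * dt, w, b, v, c) for i in range(steps)) * dt
```

The Q-Net re-ordering rearranges this same sum so that the softplus terms can be evaluated with the matrix multiplications of an ordinary forward pass.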

Please Note: the shader implementation below uses the Q-Net re-ordering, but due to the limitations of GLSL (limited matrix sizes) it doesn't use the matrix multiplications that speed up neural network evaluation.

My Method

My method operates in four steps (per ray):

  1. Find the ray's intersection with the volume bounds and the length of that intersection.
  2. Transform the X-axis of the neural network to align with the ray direction, and place the origin of the neural network at the intersection point.
  3. Slice the Y and Z axes of the neural network at 0.
  4. Integrate the neural network (now only along the X-axis) from 0 to the length of the ray.

This provides us with the optical depth along the ray within the function represented by the neural network.
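Steps 2–4 can be sketched in Python. Substituting the parametrised ray p(t) = origin + t·direction into each hidden unit W_i·p + b_i is mathematically equivalent to aligning the X-axis with the ray and slicing Y and Z at 0: it yields a 1D network with weights W_i·direction and biases W_i·origin + b_i, which is then integrated in closed form. The weights, ray, and length below are hypothetical placeholders; the real implementation runs in GLSL.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    # Numerically stable log(1 + e^x), the antiderivative of the sigmoid.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def restrict_to_ray(W, b, origin, direction):
    # Steps 2-3: re-express the 3D network along the ray.  Substituting
    # p(t) = origin + t * direction into W_i . p + b_i gives a 1D network
    # with weights W_i . direction and biases W_i . origin + b_i.
    w1d = [dot(Wi, direction) for Wi in W]
    b1d = [dot(Wi, origin) + bi for Wi, bi in zip(W, b)]
    return w1d, b1d

def optical_depth(W, b, v, c, origin, direction, length):
    # Step 4: integrate the restricted 1D network from 0 to length.
    w1d, b1d = restrict_to_ray(W, b, origin, direction)
    tau = c * length
    for wi, bi, vi in zip(w1d, b1d, v):
        if abs(wi) < 1e-12:
            tau += vi * sigmoid(bi) * length  # unit is constant on the ray
        else:
            tau += vi * (softplus(wi * length + bi) - softplus(bi)) / wi
    return tau

# Hypothetical weights for a 2-unit hidden layer, and a ray along +X.
W = [[0.4, -0.2, 0.1], [-0.3, 0.5, 0.2]]
b = [0.1, -0.2]
v = [1.0, 0.7]
c = 0.02
tau = optical_depth(W, b, v, c, origin=[0.0, 0.0, 0.0],
                    direction=[1.0, 0.0, 0.0], length=3.0)
```

The result tau is the optical depth along the ray through the density field the network represents, with no ray marching required.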

The accuracy of this method on the example data depends heavily on the training of the neural network: the more accurately the network fits the data, the better the result of the method will be.

Demo

Below is a shader implementation of this method written using ThreeJS, WebGL and GLSL. The shader simply calculates the optical depth along each ray and uses it to mix between black and white. You can use your mouse to interact with the demo: left click rotates the view; scrolling zooms in and out; right click shifts the view.

Note: there are some obvious bugs, such as a corner of the bounds being constantly visible and noisy white dots; these will be fixed in time.

Dissertation Writeup and Demo