
15 May, 2019

Seeing the whole Physically-Based picture.

Subtitle: Building our rendering on solidly shaky grounds.

Physically-Based Rendering has won. There is no question about it: after an initial period of reluctance even artists have been converted, and I don't think you can find many rendering systems nowadays, either offline or real-time, that haven't embraced PBR. And PBR has even proved able to adapt to multiple art styles, outside strict adherence to photorealism.

But, really, how much physics is there in our PBR renderers? Let's have a look.

- Optics and Photometry.


Starting from the top, we have to define our physical framework. Physics is made of models, built to "fit" reality in order to make predictions. Different models are appropriate for different contexts and problems. For rendering, we work with a framework called "geometrical optics".

In G.O. light is composed of multiple frequencies which are assumed to be independent. Light travels in straight lines (in homogeneous media). It changes direction at changes of media (changes of IOR), where it can be absorbed, reflected or refracted. It travels instantaneously and it follows the path of least time (Fermat's principle). 

Is this a good framework? It's already making a lot of assumptions, and we know it cannot model all light behavior even when it comes to things that are easily visible: diffraction, interference, fluorescence, phosphorescence. But we say that these phenomena are not that common in everyday materials, and we might be right.

That's not all though; even before we start rendering our first triangle, we make more assumptions. First, we define a color space, usually a trichromatic one, because of the metamerism of our visual system. Fine, but we know that's not correct for rendering. We know spectral rendering yields different, sometimes dramatically different, results, but we trust our artists to tune lighting, materials, and post-processing in the right way (even if the two things shouldn't be related) to generate nice images even if we restrict ourselves to RGB. Or at least, we hope.

- Scattering


Next, we have to define what happens when the light "hits" something (an IOR discontinuity). Well, who knows, light is really hard! Some electrons... resonate? Get polarized? Please let it not be something to do with quantum stuff... Anyhow, eventually they scatter some energy back... waves? particles? There is some interference at around the atomic level. Who knows. Luckily, we have another framework that comes to the rescue: microfacet theory.

Surfaces are made of microfacets, like a microscopic landscape; light rays hit them, bounce around and eventually come out. If we integrate the behavior of said microfacets over a small area, we can compute a scattering probability (a BRDF) from the distribution of the microfacets themselves and a lot of math, and voilà, rendering happens.
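For concreteness, these derivations typically land on the familiar Cook-Torrance-style specular form, where D is the microfacet normal distribution, G the shadowing-masking term, F the Fresnel reflectance and h the half-vector between the light and view directions:

$$ f_{\mathrm{spec}}(\mathbf{l},\mathbf{v}) = \frac{D(\mathbf{h})\,G(\mathbf{l},\mathbf{v})\,F(\mathbf{v},\mathbf{h})}{4\,(\mathbf{n}\cdot\mathbf{l})\,(\mathbf{n}\cdot\mathbf{v})} $$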

Over a small area? How small, by the way? Well, Naty Hoffman and Eric Heitz say around the order of magnitude of the projected area of a pixel. I say, around the order of magnitude of a light wavelength, and then the projected area thing is antialiasing applied "after". So probably it's the pixel thing that's right.

What are these microfacets made of? Ideal reflectors obeying only the Fresnel law for how much light is reflected and how much refracted. The refracted part gets into the material (for dielectrics, that somehow allow this behavior), scatters some more and eventually comes out. If it comes out still "near enough" we call that "diffuse" reflection.
Otherwise, we call that subsurface scattering. But how does the light scatter inside the material? It hits particles. Microflakes? But microfacet-based diffuse models (e.g. Oren-Nayar) simply swap the facets from ideal reflectors to ideal diffusers (Lambert)...
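For reference, real-time renderers almost universally replace the full Fresnel equations with Schlick's approximation, parameterized only by the reflectance at normal incidence, F0:

$$ F(\mathbf{v},\mathbf{h}) \approx F_0 + (1 - F_0)\,\big(1 - \mathbf{v}\cdot\mathbf{h}\big)^5 $$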

Regardless. We know all these things! We have blog posts, Siggraph talks, and books. Physics... And all of this still sits well within that "geometrical optics" framework. Rays of light hit things. So much so that we can create raytracers to brute-force these microscopic interactions and create our own BRDFs!

But is it still reasonable to use geometrical optics for these interactions? They seem to be quite... small. Maybe diffraction now matters? It turns out it does, and it's a well-known thing (if you read the papers from the sixties... Beckmann-Spizzichino), but we sweep it under the rug.

And well, we can't really derive the equations from the microfacets, that integral is itself hard, so the BRDFs that we use introduce, typically, a bunch of other assumptions. For example, they might disregard the fact that light can bounce around multiple times before "coming out".

But who cares, nice pictures can be generated with this theory, and that's what matters. Moreover, someone did try to fit the resulting equations to real-world materials, right? The MERL database? I wonder how much error there is in that. Or how well it samples real-world materials. Or how perceptual the error metric used to estimate that error is... Better not to think too much.

- Fiat Lux!


Are we done now? Far from it! In practice, we cannot just use the BRDF and brute-force light rays, not for real-time rendering, we're not Arnold. We need to compute a few more integrals!

We need to integrate over the light source, and over the surface area that is "seen" by the pixel we're considering (the pixel footprint). And that is incredibly hard, so hard we don't even try before having introduced a bunch more assumptions and approximations.
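For a single shaded point, the light integral alone is the familiar reflectance equation; the pixel footprint then wraps yet another integral, over the visible surface area, around it:

$$ L_o(\mathbf{x},\mathbf{v}) = \int_{\Omega} f_r(\mathbf{x},\mathbf{l},\mathbf{v})\, L_i(\mathbf{x},\mathbf{l})\, (\mathbf{n}\cdot\mathbf{l})\, \mathrm{d}\omega_{\mathbf{l}} $$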

First of all, when we talk about pixel footprint, we really mean that we consider some statistics of the surface normals. We don't consider the fact that, for example, the "view rays" themselves change (and the light ones too), or that the surface normals don't really exist as an entity separate from actual surface geometry (which would cause shadowing and all other fun things). We assume these effects to be small.

Then, when we talk about light, we mostly mean simple geometric shapes that emit light from their surface. For example, a sphere. At each point, the light is emitted equally in all directions, and most often, it's also emitted with the same intensity over the surface.

And even then it's not enough to compute everything in closed-form. In fact, the more complex the light is, typically, the more approximated the BRDF will become. And then we'll fit these approximated BRDFs to the "real" one, and sum everything up. And sprinkle some of that pixel footprint thing on top somehow as well, but really that's done once on the "real" BRDF, even if we never actually use that!

So we have an approximation for very small lights, and maybe a good one for spheres, one for lines and capsules with some more handwaving, even more for polygonal lights (especially if textured), and lastly one for far-away "environment" lighting... We have approximations for "diffuse" and for "specular", for each of these. And maybe for static versus dynamic objects? A lot of math and a lot of different approximations.
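In code, this tends to end up looking roughly like the sketch below; every function name here is hypothetical, the point is just the proliferation of specialized approximations:

def shade(surface, view, lights, environment):
    # one specialized approximation per light type (all names are illustrative only)
    result = 0.0
    for light in lights:
        if light.kind == 'punctual':
            result += punctual_light(surface, view, light)    # closed-form BRDF evaluation
        elif light.kind == 'sphere':
            result += sphere_light(surface, view, light)      # e.g. a representative-point approximation
        elif light.kind == 'polygon':
            result += polygon_light(surface, view, light)     # e.g. an LTC-based approximation
    # and yet another, separate approximation for distant "environment" lighting
    result += environment_light(surface, view, environment)   # e.g. split-sum prefiltering
    return result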

We compare them and make sure that more-or-less the material looks the same under different kinds of light, and call it a day... The most ambitious of us might even export a scene to a path-tracer and compute some sort of ground-truth, to visually make sure things are at least reasonable...

- We're done, right?


So... we get our final image. In all its Physically Based, 60fps, HDR glory! Spectacular.

Year after year people come up with better equations, tighter approximations, and we make shinier pixels as a result.

Is that all? Of course not! We are just getting started! 

In practice, materials are not just one surface... They can have layers! And they are never optically uniform! They sparkle! They are anisotropic, they have scratches. Really, look around, look at most things. Most things are sparkly and anisotropic, due to the way they are fabricated.

And nothing is a surface, really. It's mostly volumes and particles. Even... air! So we need fog and volumetric models. But that's not just about the light that scatters in the air back towards our virtual cameras; we should also consider how this scattering affects lighting at surfaces. Our rays of light are not that straight anymore! Participating media make our light sources more "diffuse". Bigger. All of them, even things like environment lighting! And... that should affect shadows too, right?

And now that we think about shadows... all this complexity and all these unknowns are still only about what we call "direct" lighting! What about global illumination? What about the million other hacks and assumptions that we rely upon to render each of our frames?

- Conclusions

So. How much physics is there in a frame, really? And more importantly, what's the point of all this? Should we be ashamed of not knowing physics that well? Should we do physics more? Less?



I don't know. I personally do not know physics well and I'm not too ashamed. A lot of what we've been doing is reasonable, really. We went with GGX because its "tail" helps. All the lighting improvements served our products. All the assumptions, individually, looked reasonable.

But there is value, I think, in looking at our math and our approximations holistically, now that we are getting so good at photorealism.

Perhaps there is not much value, for example, in going off the deep end of BRDF complexity if we can't then integrate these BRDFs with complex lighting, or if, in order to do so, we have to approximate them all over again.

Similarly, the features we focus on should be evaluated. Is it more important to have non-uniform emission in our light sources, or a different "tail" in GGX/T-R? Anisotropic surfaces or sparkles? Spectral sampling? Thin-film? Non-Lambertian diffuse? Of which kind? Accurate energy conservation, or multi-bounce in microfacets?

Is it better to use the best possible approximation for a given integral, even if we end up with many different ones, or should we just use a bunch of spherical Gaussians, or LTCs and such, but keep the same representation everywhere? And in general, is most of our error still in the materials, or in the lights? This is very hard to tell from just looking at artist-made pictures, because artists will compensate any way they can!

But even more importantly: how much can we keep relying on simplifying assumptions in order to make our math work?

I suspect that to answer these questions we'll need more data. Acquire it from the real world. Brute-force solutions. Then look at the data and understand what matters, what matters perceptually, what errors we are committing end-to-end, and what we should approximate better, and how...

And we should not assume that because somewhere we have a bit of physics, we are doing things correctly. We are, after all, a field that forgot for decades basic things like color spaces and gamma.

NumPy by Example

This originally was part of my Scientific Python 101 article; I've split it out, as that was a long article and sometimes I just need a quick look at this code as a reminder of how things work.
If you're interested in a similar "by example" introduction for Mathematica, check this other one out.

Execute each section in an IPython cell (copy and paste, then shift-enter)

# Set-up the environment:
# First, we'll have to import NumPy and SciPy packages.
# IPython has "magics" (macros) to help common tasks like this:
%pylab
# This will avoid scientific notation when printing, see also %precision
np.set_printoptions(suppress = True) 
# Optional (Python 2 only): change the way division works:
#from __future__ import division 
# 1/2 = 0.5 instead of integer division. Beware that 2/2 then becomes a float as well. Python 3 already divides this way.

# In IPython notebook, cell execution is shift-enter
# In IPython-QT enter evaluates and control-enter is used for multiline input
# Tab is used for completion, shift-tab after a function name shows its help
# Note that IPython will display the output of the LAST -expression- in a cell
# but variable assignments in Python are NOT expressions, so they will
# suppress output, unlike Matlab or Mathematica. You will need to add print
# statements after the assignments in the following examples to see the output
data = [[1,2],[3,4],[5,6]]
# if you want to suppress output in the rare cases it's generated, use ;
data; # try without the ; to see the difference

# There is a magic for help as well
%quickref
# Also, you can use ? and ?? for details on a symbol
%pylab? # e.g. this shows details on the %pylab magic

# In addition to the Python built-in functions help() and dir(symbol)
# Other important magics are: %reset, %reset_selective, %whos
# %prun (profile), %pdb (debug), %run, %edit

1*3 # evaluate this in a separate cell...

_*3 # ...now _ refers to the output of last evaluated cell
# again, not that useful because you can't refer to the previous expression
# evaluated inside a cell, just to the previous evaluation of an entire cell

# A numpy array can be created from a homogeneous list-like object
arr = np.array(data)
# Or an uninitialized one can be created by specifying its shape
arr = np.empty((3,3))
# There are also many other "constructors"
arr = np.identity(5)
arr = np.zeros((4,5))
arr = np.ones((4,))
arr = np.linspace(2,3) # see also arange and logspace
arr = np.random.random((4,4))

# Arrays can also be created from functions
arr = np.fromfunction((lambda x: x*x), shape = (10,))
# ...or by parsing a string
arr = np.fromstring('1, 2', dtype=int, sep=',')

# Arrays are assigned by reference
arr2 = arr
# To create a copy, use copy
arr2 = arr.copy()

# An array shape is a descriptor, it can be changed with no copy
arr = np.zeros((4,4))
arr = arr.reshape((2,8)) # returns the same data in a new view
# numpy also supports matrices, which are arrays constrained to be 2d
# (note that np.matrix is discouraged nowadays; plain 2d arrays are preferred)
mat = np.asmatrix(arr)
# Not all operations avoid copies: flatten always creates a copy
arr.flatten()
# While ravel returns a view (avoiding the copy) whenever it can
arr = np.zeros((4,4))
arr.ravel()

# By default numpy arrays are created in C order (row-major), but 
# you can arrange them in fortran order as well on creation,
# or make a fortran-order copy of an existing array
arr = np.asfortranarray(arr)
# Data in newly created arrays is contiguously packed in memory
# (views and slices of an array, however, may not be contiguous)

# Arrays can be indexed as with python lists/tuples
arr = np.zeros((4,4))
arr[0][0]
# Or with a multidimensional index
arr[0,0]
# Negative indices start from the end
arr[-1,-1] # same as arr[3,3] for this array

# Or, like Matlab, via slicing of a range: start:end
arr = arange(10)
arr[1:3] # elements 1,2
arr[:3] # 0,1,2
arr[5:] # 5,6,7,8,9
arr[5:-1] # 5,6,7,8
arr[0:4:2] # step 2: elements 0,2
arr = arr.reshape(2,5)
arr[0,:] # first row
arr[:,0] # first column

# Indexing also works with a list of indices, "fancy indexing" (see also choose)
arr = arr.reshape(10)
arr[[1,3,5]]

# and with a numpy array of bools (see also where)
arr=np.array([1,2,3])
arr2=np.array([0,3,2])
arr[arr > arr2]

# flat returns a 1D iterator
arr = arr.reshape(2,5)
arr.flat[3] # same as arr.reshape(arr.size)[3]

# Core operations on arrays are "ufunc"tions, element-wise
# vectorized operations
arr = arange(0,5)
arr2 = arange(5,10)
arr + arr2

# Operations on arrays of different shapes follow "broadcasting rules"
arr = np.array([[0,1],[2,3]]) # shape (2,2)
arr2 = np.array([1,1]) # shape (2,)
# If we do arr+arr2, the smaller array is "broadcast" to match the bigger one:
# shapes are aligned starting from the trailing dimensions, and two dimensions
# are compatible if their sizes match or if one of them has size 1
arr + arr2 # arr2 added to each row of arr!

# Broadcasting also works for assignment:
arr[...] = arr2 # [[1,1],[1,1]] note the [...] to access all contents
arr2 = arr # without [...] we just say arr2 refers to the same object as arr
arr[1,:] = 0 # This now is [[1,1],[0,0]]
# flat can be used with broadcasting too
arr.flat = 3 # [[3,3],[3,3]]
arr.flat[[1,3]] = 2 # [[3,2],[3,2]]

# broadcast "previews" broadcasting results
np.broadcast(np.array([1,2]), np.array([[1,2],[3,4]])).shape

# It's possible to manually add dimensions of size one to the shape using newaxis
# See also: expand_dims
print(arr[np.newaxis,:,:].shape) # (1,2,2)
print(arr[:,np.newaxis,:].shape) # (2,1,2)

# There are many ways to generate list of indices as well
arr = arange(5) # 0,1,2,3,4
arr[np.nonzero(arr)] += 2 # 0,3,4,5,6
arr = np.identity(3)
arr[np.diag_indices(3)] = 0 # diagonal elements, same as np.diag_indices_from(arr)
arr[np.tril_indices(3)] = 1 # lower triangle elements
arr[np.unravel_index(5,(3,3))] = 2 # unravel_index converts a flat index into a multi-dimensional one for a given shape

# Iteration over arrays can be done with for loops and indices
# Iterating over single elements is of course slower than native
# numpy operators. Prefer vector operations with slicing and masking.
# Cython, Numba, Weave or Numexpr can be used when performance matters.
arr = np.arange(10)
for idx in range(arr.size):
    print(idx)
# For multidimensional arrays there are indices and iterators
arr = np.identity(3)
for idx in np.ndindex(arr.shape):
    print(arr[idx])
for idx, val in np.ndenumerate(arr):
    print(idx, val) # assigning to val won't change arr
for val in arr.flat:
    print(val) # assigning to val won't change arr
for val in np.nditer(arr):
    print(val) # same as before
for val in np.nditer(arr, op_flags=['readwrite']):
    val[...] += 1 # this changes arr

# Vector and Matrix multiplication are done with dot
arr = np.array([1,2,3])
arr2 = np.identity(3)
np.dot(arr2, arr)
# Or by using the matrix object, note that 1d vectors
# are interpreted as rows when "seen" by a matrix object
np.asmatrix(arr2) * np.asmatrix(arr).transpose()

# Comparisons are also element-wise and generate boolean masks
arr2 = np.array([2,0,0])
print(arr2 > arr)
# Branching can be done by checking the predicates with any() or all()
if (arr2 > arr).any():
    print("at least one greater")

# Mapping a function over an array should NOT be done w/comprehensions
arr = np.arange(5)
[x*2 for x in arr] # this will return a list, not an array
# Instead use apply_along_axis, axis 0 is rows
np.apply_along_axis((lambda x: x*2), 0, arr)
# apply_along_axis is equivalent to a Python loop; for simple expressions like the above it's much slower than broadcasting
# It's also possible to vectorize python functions, but they won't execute faster
def test(x):
    return x*2
testV = np.vectorize(test)
testV(arr)

# Scipy adds a wealth of numerical analysis functions; its interface is
# simple, so I won't write about it here.
# Matplotlib (which replicates Matlab's plotting) is worth a quick look.
# IPython supports matplotlib integration and can display plots inline
%matplotlib inline
# Unfortunately, if you choose the inline backend you will lose the ability
# to interact with the plots (zooming, panning...)
# Use a separate backend like %matplotlib qt if interaction is needed

# Simple plots are simple
test = np.linspace(0, 2*pi)
plt.plot(test, np.sin(test)) # %pylab imports matplotib too
plt.show() # IPython automatically shows a plot in a cell; show() starts a new plot
plt.plot(test, np.cos(test)) # a separate plot
plt.plot(test, test) # this will be part of the cos plot
#plt.show()

# Multiple plots can also be done directly with a single plot statement
test = np.arange(0., 5., 0.1)
# Note that ** is exponentiation. 
# Styles in strings: red squares and blue triangles
plt.plot(test, test**2, 'rs', test, test**3, 'b^')
plt.show()
# It's also possible to do multiple plots in a grid with subplot
plt.subplot(211)
plt.plot(test, np.sin(test))
plt.subplot(212)
plt.plot(test, np.cos(test), 'r--')
#plt.show()

# Matplotlib plots use a hierarchy of objects you can edit to
# craft the final image. There are also global objects that are
# used if you don't specify any. This is the same as before
# using separate objects instead of the global ones
fig = plt.figure()
axes1 = fig.add_subplot(211)
plt.plot(test, np.sin(test), axes = axes1)
axes2 = fig.add_subplot(212)
plt.plot(test, np.cos(test), 'r--', axes = axes2)
#fig.show()

# All components that do rendering are called artists
# Figure and Axes are container artists
# Line2D, Rectangle, Text, AxesImage and so on are primitive artists
# Top-level commands like plot generate primitives to create a graphic
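# For example (a minimal sketch): plot() returns the Line2D primitive artists
# it creates, and their properties can be edited after the fact
line, = plt.plot(test, np.sin(test))
line.set_color('green')
line.set_linewidth(3)
plt.show()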

# Matplotlib is extensible via toolkits. Toolkits are very important.
# For example mplot3d is a toolkit that enables 3d drawing:
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
points = np.random.random((100,3))
# note that scatter wants separate x,y,z arrays
ax.scatter(points[:,0], points[:,1], points[:,2])
#fig.show()

# Matplotlib is quite extensive (and ugly) and can do much more.
# It can do animations, but it's very painful: you have to change the
# data of the artists after having generated them. Not advised.
# What I end up doing is generating a sequence of PNGs from
# matplotlib and playing them back in sequence. In general MPL is very slow.
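# A minimal sketch of that PNG-sequence approach (file names are just an example):
for i, phase in enumerate(np.linspace(0, 2*pi, 60)):
    plt.plot(test, np.sin(test + phase))
    plt.savefig('frame_%03d.png' % i)
    plt.clf() # clear the current figure before drawing the next frame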

# If you want to just plot functions, instead of discrete data,
# sympy plots are a usable alternative to manual sampling:
from sympy import symbols as sym
from sympy import plotting as splt
from sympy import sin
x = sym('x')
splt.plot(sin(x), (x, 0, 2*pi))