
10 May, 2008

The rendering equation in the real(time) world

So, I've just submitted the first draft of my ShaderX article (and I don't like it), I'm watching "A Beautiful Mind", I've been banned from Facebook, and tomorrow I don't have to work (Victoria Day or something like that), so it seemed to be the perfect time for writing this post...

My motivation in writing this article is that I believe we should root our work in the correct maths and in the correct physics. That's not because hacks have no value, they do. It's because if we don't know what we're doing, we're giving artists models that simply can't achieve the result they're aiming for, or that are too complicated. In the end everything is about that: we provide models, artists optimize parameters to fit our parametric models to a given look. Unfortunately, providing the "right" parameters is not as easy as in the context of automatic optimization: we strive not only for a minimum number of orthogonal ones, but for something that is actually understandable by humans. Still, doing things wrong surely won't help, as the whole linear lighting thing demonstrated.

All the work we do as rendering engineers is rooted in one, single equation that describes how light propagates in the world, under a simple model of light (geometric optics) that's good enough for our visualization purposes. This equation is appropriately called the rendering equation, and was first derived by Jim Kajiya (here).

I won't delve much into the details (a very good read is Eric Veach's thesis). To make that long story short(er) (and more suitable to our needs), shading a visible point on a surface amounts to computing how much light that surface scatters from the light sources back to our camera plane.
That amount is the product of three functions:
  1. The local lighting model: a function that depends on the incoming light direction and the outgoing direction we're considering; it usually varies from point to point on a surface (by changing the input parameters of the model for a given material, i.e. using texture maps). This function is called the BRDF (bidirectional reflectance distribution function). Note: for convenience, in the following I'll also fold the cosine angle term of the rendering equation into the lighting model.
  2. The visibility function: for each considered light source direction (as seen from the shaded point) we need to know whether that light source is blocked by other geometry in the scene (this is the non-local part, as it involves surfaces other than the one being shaded).
  3. The incoming light value.
We take those three functions, multiply them together, and gather for each direction the amount of lighting scattered back to the camera. If we consider only direct lighting (i.e. the direct effect of the lights on the surfaces, ignoring that the light not scattered into the camera actually bounces around the environment, acting as other, indirect light sources) that amounts, for each light, to taking the light intensity and scaling it by the occlusion (a binary function) and by the BRDF.
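
Written out (in my notation, with the cosine term kept explicit here rather than folded into the BRDF as per the note above), that gather over all incoming directions is:

    L_o(x, \omega_o) = \int_{\Omega} f_r(x, \omega_i, \omega_o) \, V(x, \omega_i) \, L_i(x, \omega_i) \, \cos\theta_i \, d\omega_i
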
Or, if we see those functions as hemispherical maps (by fixing a given outgoing direction for the BRDF), then what we're doing is multiplying the maps together and computing the integral of the result (and this is exactly what Spherical Harmonic lighting methods do, read this and this one).
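
To make that concrete, here is a minimal Monte Carlo sketch of "multiply the functions together and integrate over the hemisphere", in plain Python with a Lambert BRDF; the function names and the lambdas standing in for the light and the visibility are placeholders of mine, not code from any actual engine:

import math, random

def sample_hemisphere(normal):
    # Uniformly sample a direction on the hemisphere around 'normal'
    # (rejection sampling of the unit sphere, kept simple for clarity).
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if 0.0 < sum(x * x for x in d) <= 1.0:
            break
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-x for x in d]
    return d

def shade(normal, albedo, light_radiance, visibility, samples=1024):
    # Monte Carlo estimate of: integral over the hemisphere of
    # BRDF * visibility * incoming light * cos(theta).
    brdf = albedo / math.pi          # the Lambert BRDF is a constant
    total = 0.0
    for _ in range(samples):
        wi = sample_hemisphere(normal)
        cos_theta = sum(a * b for a, b in zip(wi, normal))
        total += brdf * visibility(wi) * light_radiance(wi) * cos_theta
    # the pdf of uniform hemisphere sampling is 1 / (2*pi)
    return total / samples * (2.0 * math.pi)

# Toy usage: a constant white environment above, no occluders;
# the result converges to the albedo (0.8).
if __name__ == "__main__":
    n = (0.0, 0.0, 1.0)
    print(shade(n, albedo=0.8,
                light_radiance=lambda wi: 1.0,
                visibility=lambda wi: 1.0))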

In our current rendering work, what we usually do is to further split that integral into three parts, corresponding to different frequencies of lighting (an interesting read related to that is the presentation of the Halo 3 material and lighting system):
  1. The ambient term: the diffuse (Lambertian) response of the material to a light that's everywhere in the environment and emits in every direction with a constant intensity. Note that the integral (average) of the correct visibility function for that light is exactly what is called ambient occlusion.
  2. The diffuse term: the part of the material response that is view-independent. It's the Lambert model of light scattering; its BRDF is constant, only the cosine term is considered.
  3. The specular term: responsible for the glossy highlights, like the Phong or the Blinn model.
Note: the sum of those three terms should give you a spherical function that integrates to at most one, otherwise you have a material that's not just scattering light, but emitting it!
Computing the second and the third term mostly depends on our choice of how to represent lights. Given a light model, the correct visibility function for that light has to be found.
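
To see how those terms typically get combined per light, here is a rough sketch in plain Python (a Blinn half-vector specular is my illustrative choice, and all the parameter names are mine, not from any particular engine):

import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_point(normal, view_dir, lights, albedo, spec_power,
                ambient_light, ambient_occlusion):
    # Ambient + diffuse + specular split, with one visibility/shadow
    # factor per light (the "correct visibility for the light model").
    # 1. Ambient: constant environment light, attenuated by ambient occlusion.
    color = albedo * ambient_light * ambient_occlusion
    for light in lights:
        l_dir = normalize(light["direction"])
        n_dot_l = max(dot(normal, l_dir), 0.0)
        shadow = light["visibility"]      # e.g. a shadow-map lookup, 0..1
        # 2. Diffuse: Lambert, view-independent, only the cosine term.
        diffuse = albedo * n_dot_l
        # 3. Specular: Blinn half-vector highlight.
        half = normalize(tuple(a + b for a, b in zip(l_dir, view_dir)))
        specular = max(dot(normal, half), 0.0) ** spec_power if n_dot_l > 0.0 else 0.0
        color += (diffuse + specular) * light["intensity"] * shadow
    return color

# Toy usage with one hypothetical directional light.
if __name__ == "__main__":
    print(shade_point(normal=(0.0, 0.0, 1.0), view_dir=(0.0, 0.0, 1.0),
                      lights=[{"direction": (0.0, 0.7, 0.7),
                               "intensity": 1.0, "visibility": 1.0}],
                      albedo=0.5, spec_power=32.0,
                      ambient_light=0.1, ambient_occlusion=0.8))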

Using the wrong shadow model will cause incorrect results. Worse, as our occlusion functions are usually approximate too (i.e. shadow maps), having them operate in regions that should already be in shadow according to the local lighting, but happen not to be due to inconsistencies between the local and direct models (i.e. lighting with N lights but shadowing only the most influential one to save rendering time), shows those defects even in those regions.

Update: I've moved the second part of this article here.

[Photo: Rendering equation in the real world (my apartment)]

4 comments:

Anonymous said...

Wow, great view!

Unknown said...

"But care has to be taken when computing those cubemaps. They have to be the light function convolved with the term of the local lighting model we're considering."
Can you speak English please? :)
Once upon a time I wanted to create simple lighting for an animated character in my demo. The scene relied heavily on HDR and indirect light bounces. So I wanted some kind of fake, cheap GI for my character. To do this I placed a cube in the scene, roughly in the place of the character, unwrapped it, applied smoothing (so that it became a sphere) and baked the lighting in Max. From the baked texture I created a cubemap. And I used the (deforming with bones) normals of my character to sample the cubemap in the shader.
So is it "the light function convolved with the term of the local lighting model we're considering"? :)
Looks OK, but the lack of dynamic ambient occlusion is the very first thing that comes to my mind.

DEADC0DE said...

Can't understand why you smoothed the cube into a sphere. If the cube itself was of a perfectly diffuse material then yes, that was correct for a diffuse cubemap (if you render it with global illumination enabled). Because for each texel of your cubemap what Max did is exactly to see how much light that texel receives, and that is exactly the integral (sum) of all the incoming light weighted with the diffuse function (that is, a convolution). This only works for diffuse cubemaps, as a purely diffuse response is view-independent, so you can bake it that way.

That approach is also used in the real world, where you can photograph some probes (spheres of a suitable material) and use them to reconstruct the lighting that you had in that environment.
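
For what it's worth, this is roughly the per-texel operation (the "integral of all the incoming light weighted with the diffuse function"), sketched in plain Python over a list of environment samples; the data layout is a placeholder of mine, not what Max does internally:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_convolution(env_samples, normal):
    # Irradiance for one output texel whose direction is 'normal':
    # sum of incoming radiance weighted by the clamped cosine.
    # Each environment sample is (direction, radiance, solid_angle).
    total = 0.0
    for direction, radiance, solid_angle in env_samples:
        total += radiance * max(dot(direction, normal), 0.0) * solid_angle
    # divide by pi so a constant, fully bright environment comes out as 1
    return total / math.pi

# Toy usage: a single bright sample straight above the texel's normal.
if __name__ == "__main__":
    env = [((0.0, 0.0, 1.0), 3.0, 0.1)]
    print(diffuse_convolution(env, normal=(0.0, 0.0, 1.0)))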

Unknown said...

Yes, that was only for diffuse lighting. I smoothed out the cube to gather the light in Max uniformly from all directions. Not sure whether this really matters since the baked texture will end up as a cube map anyway.