Note: this is an addendum to the previous post. It should be self-contained, but I felt that post was already too long to include this, and that the topic was too important to be relegated to an appendix...
How big is a point? Infinitesimal? Well, for sure you can pack two of them as close as you want, up to your floating-point precision... But where does dimension come into play in our CG model?
Let's take a simplified version of the scenario of my last post:
- We want to simulate a rough, planar mirror.
- We render-to-texture a mirrored scene, as usual.
- We take a normalmap for the roughness.
- We fetch texels from our mirrored scene texture using a screen-space UV but...
- ...we distort that UV by an amount proportional to the tangent-space projection of the normalmap (a minimal sketch of this follows the list).
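Putting the steps above together, here's a minimal sketch of the distorted fetch in C-style pseudocode (the types, helper names, and distortion scale are illustrative stand-ins, not the original implementation):

```cpp
// Minimal, self-contained sketch of the distorted reflection fetch.
struct float2 { float x, y; };
struct float3 { float x, y, z; };

// Stubs standing in for real texture fetches: the first would sample the
// mirrored-scene render target, the second the roughness normalmap
// (decoded from [0,1] storage to a [-1,1] tangent-space vector).
float3 SampleMirrorScene(float2 uv)   { return float3{uv.x, uv.y, 0.0f}; }
float3 SampleTangentNormal(float2 uv) { return float3{0.0f, 0.0f, 1.0f}; }

float3 RoughPlanarReflection(float2 screenUV, float2 surfaceUV,
                             float distortionScale)
{
    // On a planar mirror the tangent frame is constant, so the .xy
    // components of the tangent-space normal are already its projection
    // onto the reflection plane.
    float3 n = SampleTangentNormal(surfaceUV);

    // Distort the screen-space lookup proportionally to that projection.
    float2 uv = float2{screenUV.x + n.x * distortionScale,
                       screenUV.y + n.y * distortionScale};

    return SampleMirrorScene(uv);
}
```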
But let's look at that "blurring" operation a little closer... What are we doing? We blur, or pre-filter, because it's cheaper than supersampling and post-filtering... So, is it anti-aliasing? Yes, but not really... What we are really doing is integrating a BRDF: the blur we apply is similar (even if way more incorrect) to the convolution we perform on a cubemap (or equivalent) that encodes the light surrounding an object, to build a lookup table for diffuse or specular illumination.
It's the same operation! In my previous post I said that I considered the surface a perfect mirror, with a Dirac delta (perfectly specular) BRDF. Now, the reflected texture exactly represents the light sources in our scene, or rather some of them: the first-bounce indirect ones (all the objects in the scene that directly reflect light from energy-emitting surfaces). If we convolve it with a perfectly specular BRDF we get the same image back, indexed by the normals of the surface we're shading. But if we blur it, we are convolving the same scene with a glossy, no-longer-perfectly-specular BRDF!
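To make the equivalence explicit, here is the integral being approximated, in my own notation (a standard form of the reflection integral, not an equation from the original post):

$$L_o(\omega_o) = \int_{\Omega} f_r(\omega_i, \omega_o)\, L_i(\omega_i)\, (n \cdot \omega_i)\, d\omega_i$$

With a Dirac delta BRDF, $f_r \propto \delta(\omega_i - \mathrm{reflect}(\omega_o, n))$, the integral collapses to a single fetch from the mirrored-scene texture; widening $f_r$ into a glossy lobe turns that fetch into a weighted average of neighboring texels, which is exactly a blur.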
In my implementation, for various reasons, I used only mipmaps, which are not really a blur... The nice thing, for higher quality, would be to use a real blur with a BRDF-shaped kernel that sits on the reflection plane (so it would end up as an ellipse when projected into image space)...
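For reference, a sketch of the mip-based shortcut; the roughness-to-mip mapping below is a common convention borrowed from prefiltered-cubemap pipelines, my assumption rather than the mapping actually used here:

```cpp
// Pick a blurrier mip as roughness grows, and feed the result to a
// tex2Dlod-style fetch: a crude stand-in for a BRDF-shaped kernel.
float RoughnessToMipLevel(float roughness, float mipCount)
{
    // roughness = 0 -> mip 0, the sharpest level (perfect mirror);
    // roughness = 1 -> the smallest mip (the widest lobe we can fake).
    return roughness * (mipCount - 1.0f);
}
```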
In this context, we need all those complications because we don't know any other way of convolving the first-bounce indirect lighting with our surfaces: we don't have a closed-form solution of the rendering equation for that BRDF, which means we can't express that shading as a simple lighting model (as we can, for example, for a Phong BRDF with a point light source).
What does that show? It shows us a dimension that exists in our computer graphics framework, implied by the statistical model of the BRDF. We take our real-world, physical surfaces, which are rough and imperfect if we look at them closely enough (but not too closely, otherwise the whole geometrical optics theory stops applying), we choose a minimum dimension, look at how the roughness is oriented below it, take a mean over that dimension, and capture the result in a BRDF.
Note how this low-pass filtering, or blurring, over the dimensions of the world is very common; in fact, it's the basis of every mathematical model in physics (and remember that calculus does not like discontinuities...). Models always have an implied dimension: if you look at your phenomena at a scale where that dimension becomes relevant, the model "breaks".
The problem is that in our case we push that dimension to be quite large, not because we want to avoid entering the quantum-physics regime, but because we don't want to deal with explicitly integrating high frequencies. So we assume our surfaces are flat and capture the high frequencies only as an illumination issue, in the BRDF. That way, the dimension we choose depends on the distance from which we're looking at a surface, and it can easily end up on the order of millimeters.
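A back-of-the-envelope check of that claim, with numbers of my own choosing (not from the post):

```cpp
#include <cmath>

// World-space footprint of a single pixel at a given viewing distance,
// for a pinhole camera: any surface detail below this size can only
// live in the BRDF.
float PixelFootprint(float distance, float verticalFovRadians,
                     float verticalResolution)
{
    // Height of the view frustum at `distance`, divided by the number
    // of pixel rows covering it.
    return 2.0f * distance * std::tan(verticalFovRadians * 0.5f)
           / verticalResolution;
}

// e.g. at 10 meters, with a 60-degree vertical FOV and 720 rows:
// PixelFootprint(10.0f, 1.047f, 720.0f) is about 0.016, i.e. ~16mm.
```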
We always blur... We prefer to pre-blur instead of post-blurring, as the latter is way more expensive, but in the end what we want is to reduce all our frequencies (geometry, illumination, etc.) to the sampling one.
What does that imply, in practice? How is it relevant to our day-to-day work?
What if our surfaces have details of that dimension? Well, things generally don't go well... That's why our artists at Milestone, when we were doing the road shading, found it impossible to create a roughness normalmap for the tracks: it looked bad...
We ended up using normalmaps only for big cracks and for the wet track effect, as I explained.
It also means that even for the wet track, it's wise to use the normalmap only for the reflection, and not for the local lighting model... The water surface and the underlying asphalt layer have a much better chance of looking good using geometric normals, maybe modulating the specular highlights with a noisy specular map, i.e. using the length of the Z component (the axis aligned with the geometric normal) of the tangent-space roughness normalmap...
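A sketch of that split (structure and names are mine, not the shipped shader): the analytic specular is computed with the geometric normal, and the tangent-space Z of the roughness normalmap only modulates it:

```cpp
#include <algorithm>

float WetTrackSpecular(float geometricSpecular, // specular term computed
                                                // with the geometric normal
                       float tangentNormalZ)    // Z of the tangent-space
                                                // roughness normal
{
    // In tangent space, Z is the axis aligned with the geometric normal:
    // the more the micro-normal tilts away from it (Z < 1), the more we
    // dampen the highlight, giving the noisy specular look for free.
    return geometricSpecular * std::clamp(tangentNormalZ, 0.0f, 1.0f);
}
```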
Note: if I remember correctly, in my early tests I was using the normalmap only for the water reflection, with no separate specular for the water (so that layer was made only of the indirect reflection) and a specular without the normalmap for the asphalt layer. Those are all details, but the interesting thing I hope I showed here is why this kind of combination for shading that surface worked better than others...