I'm not even halfway through "The Exorcism of Emily Rose", so I still have time to write. From some comments I've noticed that I wrote too much, too fast, in my post about the rendering equation, so this is a (hopefully better, even if I doubt it, as I'm watching those scary movies) rewrite of its second part.
One of the things I'm confident of is that whatever models we provide the artists with, they are capable of tweaking them in unpredictable ways in order to make them fit the idea they want to express. That's great, but it has two problems. The first one is that such tweaking could end up making suboptimal use of our precious, scarce computing resources. The second, somewhat related to the first, is that with bad models it could be too complicated to find parameters that achieve the desired look.
So what does a good rendering engineer have to do to avoid those problems? To me the main thing is to always work together with the artists (a good idea is to look at what they're trying to do, ask them to make prototypes with their DCC tools, and then see how we can express their art in a more procedural way), know what they need, know the physics, and base our models both on the artists' needs and on good maths. Good maths does not mean correct physics, we are far from that in realtime rendering, but reasonable physics: models that are based on solid ideas.
Now for a simple example of how things can go wrong; it's related to a small problem I had at work.
A nice trick, known since the software rendering times, is to simulate the specular reflections on a curved object by assuming that each point on that object sees the same scene. It's the basic environment mapping technique that has been around for years. We project an image of the surrounding environment onto an infinite sphere or cube around the model, save it into a texture, and index that texture with the per-pixel reflection vector (the camera-to-point vector reflected around the point's normal). We can do it in realtime as well, and we usually do, e.g. in racing games.
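Just to make the lookup concrete, here is a minimal C++ sketch of the per-pixel reflection vector (the Vec3 type and helpers are made up for illustration, they're not from any particular engine):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Per-pixel reflection vector: the camera-to-point direction mirrored
// around the surface normal, R = V - 2 (N . V) N. The result is the
// direction we use to index the environment texture.
Vec3 reflectionVector(Vec3 cameraPos, Vec3 point, Vec3 normal)
{
    Vec3 v = normalize(sub(point, cameraPos)); // camera-to-point direction
    return sub(v, scale(normal, 2.0f * dot(normal, v)));
}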
That's interesting because we're shading the object by considering the light reflected from other objects towards it, so it's a rare example of simulating indirect global illumination (indirect specular, which is quite easy compared to indirect diffuse or, worse, glossy).
But what is that texture map encoding? Well, it's a spherical function, something that is indexed by a direction vector. It's an approximation of the spherical function that encodes the incoming radiance, the incoming light that scene objects are transmitting towards the point we want to shade.
Now, for a purely specular mirror, we have to notice that its BRDF is a Dirac impulse: a spherical function that is non-zero only along the reflection vector and zero everywhere else. That BRDF can be encoded in a two-dimensional function (in general, BRDFs are four-dimensional, i.e. they need two direction vectors, the incoming and the outgoing one).
What happens if we convolve that BRDF with our approximated incoming light function, as the rendering equation does in order to compute the total incoming energy that is going to be scattered towards the view direction? Well, we get a function that's zero everywhere and equals the envmap texture value only along the reflection direction. That's exactly the same as taking only that sample from the envmap in the first place. So our envmapping algorithm is a reasonable approximation for the purely specular part of our material. Easy!
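In symbols (a rough sketch that leaves out the cosine and visibility terms of the full rendering equation):

L_o(\omega_o) = \int_\Omega f_r(\omega_i, \omega_o) \, L_i(\omega_i) \, d\omega_i

For a perfect mirror, f_r(\omega_i, \omega_o) = \delta(\omega_i - R), where R is \omega_o mirrored around the normal, so the integral collapses to L_o(\omega_o) = L_i(R): a single envmap fetch.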
Now, another thing that was easily discovered in those mystical early days is that if you replace your spherical reflection image with an image that encodes a Phong lobe, you get a cheap way of doing Phong shading (cheap when memory accesses were not so expensive compared to ALU instructions).
Why does that work? It does because what we're encoding in the envmap, for each direction, is the convolution of the BRDF with the lighting function. In that case we are considering the light function to be a Dirac impulse (a single point light), and convolving it with a Phong lobe. Convolving something with a Dirac again results in an unchanged function, so we're storing the Phong lobe in our texture map, and as the Phong specular reflection model can be reduced to a two-dimensional function, that precomputation works.
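A sketch of that bake, reusing the Vec3 helpers (and the <cmath> include) from the snippet above; the shininess value and the per-texel direction mapping are assumptions for illustration:

// Bake a Phong lobe into an environment map: with a single directional
// light L (a Dirac impulse), the texel corresponding to direction R just
// stores (R . L)^shininess. Indexing the map with the per-pixel reflection
// vector at runtime then gives Phong specular with one texture fetch.
float phongLobeTexel(Vec3 texelDir, Vec3 lightDir, float shininess)
{
    float c = dot(normalize(texelDir), normalize(lightDir));
    return c > 0.0f ? std::pow(c, shininess) : 0.0f;
}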
But we can be smarter, and not use Dirac impulses. We can take an arbitrary light configuration, convolve it with our specular model, index the result with the reflection vector, and voilà, we have (an approximation of) the specular part of our shading. If we do the same, this time convolving the light function with a cosine lobe (the Lambert model), and index that with the normal vector, we get the diffuse part as well.
This is a clever trick that we use a lot nowadays; in some way it's the same thing we do with spherical harmonics too (SH are another way of storing spherical functions, they're really interesting, but that's the subject for another post). You can use a cubemap indexed with the surface normal for the diffuse term and another indexed with the reflection vector for the glossy one. But care has to be taken when computing those cubemaps. They have to be the light function convolved with the term of the local lighting model we're considering, as we just said!
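This is roughly what the correct precomputation looks like, again reusing the Vec3 helpers from above (RadianceSample and the sampling scheme are hypothetical, just to show the structure of the convolution):

#include <vector>

// One sample of the incoming light function: a direction, the radiance
// arriving from it, and the solid angle the sample covers on the sphere.
struct RadianceSample { Vec3 dir; float radiance; float solidAngle; };

// Convolve the incoming light with a cosine lobe to get one texel of the
// diffuse (irradiance) cubemap, the one we'll index with the surface normal.
float convolveCosineLobe(Vec3 normal, const std::vector<RadianceSample>& samples)
{
    float sum = 0.0f;
    for (const RadianceSample& s : samples)
    {
        float c = dot(normal, normalize(s.dir)); // Lambert: max(N . L, 0)
        if (c > 0.0f)
            sum += s.radiance * c * s.solidAngle;
    }
    return sum;
}

The glossy cubemap is built the same way, just with a different lobe: weight each sample by pow(max(dot(texelDir, s.dir), 0), shininess) and index the result with the reflection vector instead of the normal.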
What is usually done instead is for the artists to use the Gaussian blur in Photoshop or, if the cubemaps are generated in realtime, for the renderer to use a separable Gaussian filter (as Gaussians are the only circularly symmetric filters that are separable).
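To be clear about what that means in practice, here is a minimal sketch of such a per-face blur (toy C++; the face is assumed to be a square, single-channel, row-major image, and borders are simply skipped):

#include <cmath>
#include <vector>

// One pass of a separable Gaussian blur over a single cubemap face of size
// n x n; running it once with horizontal = true and once with false gives
// the full 2D blur. Note that it treats the face as a flat image: no face
// borders, no account of how the texels map onto the sphere.
void gaussianBlurPass(std::vector<float>& face, int n, float sigma, bool horizontal)
{
    int radius = static_cast<int>(3.0f * sigma);
    std::vector<float> out(face.size(), 0.0f);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
        {
            float sum = 0.0f, wsum = 0.0f;
            for (int k = -radius; k <= radius; ++k)
            {
                int sx = horizontal ? x + k : x;
                int sy = horizontal ? y : y + k;
                if (sx < 0 || sx >= n || sy < 0 || sy >= n)
                    continue;
                float w = std::exp(-(k * k) / (2.0f * sigma * sigma));
                sum += w * face[sy * n + sx];
                wsum += w;
            }
            out[y * n + x] = sum / wsum;
        }
    face = out;
}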
But a Gaussian is not a cosine lobe nor a Phong one! And I seriously doubt that artists are going to find a Gaussian that is a good approximation of those either. And even if they do, filtering the cubemap faces is not the same as applying a convolution to the spherical function the cubemap is representing (the equivalent convolution will be distorted towards the corners of the cubemap, as a cubemap does not have the same topology as a sphere: its texels do not cover equal solid angles when projected onto the sphere, so we have to account for that when applying our filter!).
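This is the kind of correction I mean: the relative weight of each texel in the spherical convolution, up to the constant texel-area factor.

// Relative solid angle covered by a cubemap texel whose center projects to
// (u, v) on its face, with u and v in [-1, 1]: a texel near a face corner
// covers a smaller patch of the sphere than one at the face center, so it
// must count less in the convolution.
float texelSolidAngleWeight(float u, float v)
{
    float d = 1.0f + u * u + v * v;     // squared distance from the cube center
    return 1.0f / (d * std::sqrt(d));   // du dv / (u^2 + v^2 + 1)^(3/2)
}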
Moreover, we can't have a pair of cubemaps for each different specular function, so we have to choose a single convolution size and hack some kind of exponential AFTER the cubemap access!
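Something like this (a hypothetical fragment, not from any specific codebase):

// The glossy cubemap was prefiltered with one fixed lobe width, so per-material
// sharpness gets faked by raising the fetched value to a tweakable exponent
// after the lookup; this is exactly the kind of hack discussed above.
float fakeSharpen(float fetchedGlossy, float materialExponent)
{
    return std::pow(fetchedGlossy, materialExponent);
}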
That led to complaints about the inability to find the right blur sizes to achieve the correct look. That wasn't a failure of the cubemaps, but of how we were using them; that incorrect model could not be made to fit the look we wanted. In theory, it was a good approximation of diffuse and Phong shading under arbitrary light sources; in practice, the implementation details made our approximation different from what we were thinking, and in the end, bad.
Update: HDRShop is capable of doing the right convolutions on cubemaps, as I described there!