26 October, 2008

Today I was walking on West Broadway to buy some camera goodies (this is a view from the Cambie bridge; my apartment is in the tallest building on the right)... It's winter, but over the weekend it was sunny and nice. Turns out that I'm happy. Not that that's strange, I'm usually happy: even if our minds are ruled by the Y combinator and our lives are meaningless, we're human after all, and my meaningless life is quite good. It's just that we don't stop very often to think about that, go for a walk, sing along, like a fool in a swimming pool.
20 October, 2008
Just blur
Note: this is an addendum to the previous post. Even if it should be self-contained, I felt that the post was already too long to add this to, and that the topic was too important to be relegated to an appendix...
How big is a point? Infinitesimal? Well, for sure you can pack two of them as close as you want, up to your floating-point precision... But where does dimension come into play in our CG model?
Let's take a simplified version of the scenario of my last post:
- We want to simulate a rough, planar mirror.
- We render-to-texture a mirrored scene, as usual.
- We take a normalmap for the roughness.
- We fetch texels from our mirrored scene texture using a screen-space UV but...
- ...we distort that UV by an amount proportional to the tangent-space projection of the normalmap.
But let's look at that "blurring" operation a little more closely... What are we doing? We blur, or pre-filter, because it's cheaper than supersampling and post-filtering... So, is it anti-aliasing? Yes, but not really... What we are really doing is integrating a BRDF: the blur we apply is similar (even if way less correct) to the convolution we do on a cubemap (or equivalent) encoding the lights surrounding an object, to build a lookup table for diffuse or specular illumination.
It's the same operation! In my previous post I said that I considered the surface a perfect mirror, with a Dirac delta, perfectly specular, BRDF. The reflected texture is exactly a representation of the light sources in our scene, or rather some of them, the first-bounce indirect ones (all the objects in the scene that directly reflect light from energy-emitting surfaces). If we convolve it with a perfectly specular BRDF we get back the same image, indexed by the normals of the surface we're shading. But if we blur it, that's a way of convolving the same scene with a glossy, not perfectly specular, BRDF!
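In symbols, the blurred lookup is a rough stand-in for the rendering-equation integral over the mirrored image (just a sketch, with L_mirror denoting the radiance stored in the reflection texture and f_r the surface BRDF):

L_o(x, \omega_o) \approx \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_{mirror}(\omega_i) \, (n \cdot \omega_i) \, d\omega_i

With a Dirac-delta f_r this collapses to reading a single mirrored texel; with a wider lobe it becomes a weighted average of that texel's neighborhood, which is exactly what the blur (or the mipmap fetch) is faking.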
In my implementation, for various reasons, I used only mipmaps, which are not really a blur... The nice thing would be, for higher quality, to use a real blur with a BRDF-shaped kernel that sits on the reflection plane (so it ends up as an ellipse when projected into image space)...
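Something like this (a minimal sketch, not the shipped code: kernelAxisU and kernelAxisV are assumed to be the two reflection-plane tangent directions projected into reflection-map UV space and scaled by the BRDF lobe width, which you would have to compute per pixel):

// Fixed set of unit-disc offsets; any decent sampling pattern would do.
static const float2 kDiscTaps[8] =
{
    float2( 0.7071,  0.0000), float2(-0.7071,  0.0000),
    float2( 0.0000,  0.7071), float2( 0.0000, -0.7071),
    float2( 0.5000,  0.5000), float2(-0.5000,  0.5000),
    float2( 0.5000, -0.5000), float2(-0.5000, -0.5000)
};

// Average the reflection map over an elliptical footprint: the two axes are the
// image-space projections of a disc sitting on the reflection plane.
float3 BlurReflectionElliptical(sampler2D reflMap, float2 uv,
                                float2 kernelAxisU, float2 kernelAxisV)
{
    float3 sum = 0;
    [unroll]
    for (int i = 0; i < 8; i++)
    {
        float2 offset = kDiscTaps[i].x * kernelAxisU + kDiscTaps[i].y * kernelAxisV;
        sum += tex2D(reflMap, uv + offset).rgb;
    }
    return sum / 8;
}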
In that context, we need all those complications because we don't know any other way of convolving the first-bounce indirect lighting with our surfaces: we don't have a closed-form solution of the rendering equation for that BRDF, which means we can't express that shading as a simple analytic lighting model (as we can, for example, for a Phong BRDF with a point light source).
What does that show? It shows us a dimension that is built into our computer graphics framework, implied by the statistical model of the BRDF. We take our real-world, physical surfaces, which are rough and imperfect if we look at them closely enough (but not too closely, otherwise the whole geometrical-optics theory stops applying), we choose a minimum dimension, we look at how the roughness is oriented at that scale, we take a mean over that dimension, and we capture it in a BRDF.
Note how this low-pass filtering, or blurring, over the world's dimensions is very common; in fact, it is the basis of every mathematical model in physics (and remember that calculus does not like discontinuities...). Models always have an implied dimension: if you look at your phenomenon at a scale where that dimension becomes relevant, the model "breaks".
The problem is that in our case we push that dimension to be quite large, not because we want to avoid entering the quantum-physics regime, but because we don't want to deal with explicitly integrating high frequencies. So we assume our surfaces are flat and capture the high frequencies only as an illumination issue, in the BRDF. That way, the dimension we choose depends on the distance we're looking at a surface from, and it can easily be on the order of millimeters.
We always blur... We prefer to pre-blur instead of post-blurring, as the latter is way more expensive, but in the end what we want is to reduce all our frequencies (geometry, illumination, etc.) to the sampling one.
What does that imply, in practice? How is that relevant to our day to day work?
What if our surfaces have details of that dimension? Well, things generally don't go well... That's why our artists at Milestone, when we were doing the road shading, found it impossible to create a roughness normal map for the tracks: it looked bad...
We ended up using normalmaps only for big cracks and for the wet track effect, as I explained.
It also means that even for the wet track it's wise to use the normalmap only for the reflection, and not for the local lighting model... The water surface and the underlying asphalt layer have a much better chance of looking good using the geometric normals, maybe modulating the specular highlights with a noisy specular map, i.e. with the Z component (the axis aligned with the geometric normal) of the (tangent-space) roughness normalmap...
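A sketch of that last idea (the names are mine, not the shipped shader): light with the geometric normal, and only use the tangent-space normalmap Z to damp the highlight where the baked roughness bends away from the surface.

// Assumed inputs: geomNormalWS, viewDirWS, lightDirWS (unit length, world space),
// normalMapNormalTGS (unpacked, normalized tangent-space normalmap sample),
// SPECULARPOWER (a material constant).
float roughnessDamp = saturate(normalMapNormalTGS.z); // 1 = flat texel, <1 = rough texel
float3 halfVec = normalize(lightDirWS + viewDirWS);
float specular = pow(saturate(dot(geomNormalWS, halfVec)), SPECULARPOWER) * roughnessDamp;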
Note: if I remember correctly, in my early tests I was using the normalmap only for the water reflection, with no separate specular highlight for the water (so that layer was made only of the indirect reflection) and a specular without the normalmap for the asphalt layer. All of those are details, but the interesting thing that I hope I showed here is why this kind of combination for shading that surface worked better than others...
18 October, 2008
Impossible is approximatively possible
I'm forcing myself to drive this blog more towards practical matters and less towards anti-C++ rants and how cool other languages are (hint, hint)... The problem with that is my to-do list, which nowadays, after work, is full of non-programming tasks... But anyway, let's move on with today's topic: simulating realistic reflections on wet asphalt.
I was really happy to see the preview of MotoGP'08. It's in some way the sequel of the last game I did in Italy, SuperBike'07, and it's based on the same base technology that my fellow colleagues in Milestone's R&D group and I developed. It was a huge amount of work: five people working on five platforms, writing almost everything from scratch (while the game itself was still based on the solid grounds of our old-gen ones, the 3D engine and the tools started from zero).
One of the effects I took care of was the wet road shading. I don't know about the technology of the actual shipped games; I can guess it's an improved version of my original work, but that's not really important for this post. What I want to describe is the creative process of approximating a physical effect...
Everything starts from the requirements. Unfortunately at that time we didn't have any formal process for that, we were not "agile", we were just putting our best effort without much strategy. So all I got was a bunch of reference pictures, the games in our library to look for other implementations of the same idea, and a lot of talking with our art director. Searching on the web I found a couple of papers, one a little bit old but geared specifically towards driving simulations.
The basics of the shader were easy (a rough sketch of this layering follows the list):
- A wet road is a two layer material, the dry asphalt with a layer of water on top. We will simply alpha blend (lerp) between the two.
- We want to have a variable water level, or wetness, on the track surface.
- The water layer is mostly about specular reflection.
- As we don't have ponds on race tracks, we could ignore the bending of light caused by the refraction (so we consider the IOR of the water to be the same as the air's one).
- Water will reflect the scene lights using a Blinn BRDF.
- Water will have the same normals as the underlying asphalt if the water layer is thin, but it will "fill" asphalt discontinuities if it's thick enough. That's easy if the asphalt has a normalmap: we simply interpolate that with the original geometry normal, proportionally to the water level.
- We need the reflection of the scene objects into the water.
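Here is the promised sketch of that layering (my own naming, certainly not the shipped shader): wetness in [0,1] drives both the normal "filling" and the dry/wet blend, and sceneReflection stands for the planar reflection term whose code is listed further down.

// All vectors unit length; lightDir and viewDir point away from the surface.
float3 ShadeWetAsphalt(float3 asphaltAlbedo, float3 geomNormal, float3 bumpNormal,
                       float3 viewDir, float3 lightDir, float3 lightColor,
                       float3 sceneReflection, float wetness)
{
    // Thin water follows the asphalt bumps, thick water fills them in.
    float3 waterNormal = normalize(lerp(bumpNormal, geomNormal, wetness));

    // Dry layer: simple diffuse on the bumped asphalt.
    float3 dry = asphaltAlbedo * saturate(dot(bumpNormal, lightDir)) * lightColor;

    // Water layer: mostly specular, a Blinn highlight plus the mirrored scene.
    float3 halfVec = normalize(lightDir + viewDir);
    float3 wet = dry + lightColor * pow(saturate(dot(waterNormal, halfVec)), 64) + sceneReflection;

    // Two layers, blended (lerped) by how wet the surface is.
    return lerp(dry, wet, wetness);
}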
Everything here seems straightforward and can be done with various levels of sophistication. For example, an idea that we discarded, as it was complicated for gameplay to handle, was to have the bikes dynamically interact with the water, drying the areas they passed over.
The problem comes when you try to implement the last point, the water reflections. Reflections off planar mirrors are very easy: you only have to render the scene, transformed by the mirror's plane, in a separate pass and you're done. A race track itself is not flat, but this is not a huge problem; it's almost impossible to notice the error if you handle the bikes correctly (mirroring them with a "local" plane located just under them: if you use the same plane for all of them, some reflections will appear detached from the contact point between the tires and the ground).
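For reference, this is the kind of mirroring matrix I mean (a sketch with my own naming, assuming the row-vector mul(v, M) convention used in the other snippets; plane.xyz is the unit plane normal and plane.w its distance term, as in ax+by+cz+d = 0):

float4x4 MakeReflectionMatrix(float4 plane)
{
    float3 n = plane.xyz;
    float  d = plane.w;
    // Reflection of a point p is p - 2*(dot(n,p) + d)*n, written as an affine matrix.
    return float4x4(
        1 - 2*n.x*n.x,    -2*n.x*n.y,    -2*n.x*n.z, 0,
           -2*n.y*n.x, 1 - 2*n.y*n.y,    -2*n.y*n.z, 0,
           -2*n.z*n.x,    -2*n.z*n.y, 1 - 2*n.z*n.z, 0,
           -2*d*n.x,      -2*d*n.y,      -2*d*n.z,   1);
}

Concatenate it in front of the view matrix when rendering the reflected pass (and remember to flip the triangle winding).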
Easy, you can code that in no time, and it will look like a marble plane... The problem is that the asphalt, even when wet, still has a pretty rough surface, and thus it won't behave as a perfect mirror; it will behave more like a broken one. Art direction asked for realistic reflections, so... certainly not like that.
Let's stop thinking about hacks and think about what happens in the real world... Let's follow a ray of light that went from a light to an object, then to the asphalt and then to the eye/camera... backwards (under the framework of geometrical optics, which is what we use for computer graphics, you can always walk a light path backwards; for more details see the famous Ph.D. thesis by Eric Veach)!
So we start at the camera and go towards the track point we're considering; from there, the ray went towards a point on a bike. In which direction? In general we can't know: any direction for which the BRDF is non-zero could make the connection (otherwise that connection has no effect on the shading of the track, and thus we won't be able to see it). After bouncing in that direction, the ray travels for an unknown distance, reaches the bike, and from there it goes towards a light, whose location we do know.
Now simulating all this is impossible: we have two things that we don't know, the reflection direction and the distance the light ray travelled between the track and the bike, and those can be computed only by raytracing...
Let's try now to fill the holes using some approximations that we can easily compute on a GPU.
First of all we need the direction. That's easy: if we consider our reflections to be perfectly specular, the BRDF is a Dirac impulse, so there is only one direction for which it's non-zero, and that is the view ray (camera to track) reflected around the (track) normal.
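In HLSL that is just the built-in intrinsic (a one-line sketch, with viewDir pointing from the camera to the track point):

float3 reflDir = reflect(viewDir, trackNormal); // = viewDir - 2 * dot(viewDir, trackNormal) * trackNormal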
The second thing that we don't know is the distance the ray travelled; we can't compute that, it would require raytracing. In general, reflections would require raytracing, so why are the planar-mirror ones an exception? Because in that case the reflection rays are coherent: visibility can be computed for every point on the mirror using a single projection matrix, and that's exactly what rasterization is able to do!
If we can render planar mirrors, we can also compute the distance of each reflected object from the reflection plane. In fact it's really easy! So we do have a measure of the distance, just not the one that we want: not the distance our reflected ray travels according to the rough asphalt normals, but the one it travels according to a smooth, marble-like surface. It's still something!
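The reflection pass can store that plane-to-object distance in the alpha channel of the reflection map, which is what the .a read in the code below assumes. A minimal sketch of that pass (my naming, not the original code):

float4 MIRRORPLANE;          // xyz = unit plane normal, w = plane distance term
float MAXREFLECTIONDISTANCE; // distance at which the distortion/blur saturates

struct ReflVS_Out { float4 Pos : POSITION; float3 WorldPos : TEXCOORD0; float3 Normal : TEXCOORD1; };

float4 ReflectionScenePS(ReflVS_Out In) : COLOR
{
    // Shade the mirrored object as usual (a placeholder diffuse term here)...
    float3 color = saturate(dot(normalize(In.Normal), float3(0, 1, 0))).xxx;
    // ...and write the normalized point-to-plane distance into alpha, so the main
    // pass can scale the UV distortion (and later the mip level) with it.
    float planeDist = abs(dot(MIRRORPLANE.xyz, In.WorldPos) + MIRRORPLANE.w);
    return float4(color, saturate(planeDist / MAXREFLECTIONDISTANCE));
}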
How do we go from smooth and flat to rough? Well, the reflected vectors are not that different: if we have the point reflected by a smooth mirror, we can reasonably assume the point the rough mirror hits is more or less around it. So the idea is simple: we take the perfect reflection we have in the render-to-texture image, and instead of reading the "right" pixel we read a pixel around it, offset in a direction given by the difference between the smooth reflection vector and the rough one. But that difference is the same as the difference between the geometric normal and the normalmap one! Everything is going smoothly... We only need to know how far to go in that direction, and that's not a huge problem either: we can approximate it with the distance between the point we would have hit with a perfectly smooth mirror and the mirror itself. That distance is straightforward to compute while rendering the perfect reflection texture or in a second pass, by resolving the z-buffer of the reflection render.
Let's code this:
// Store a copy of the POSITION register in another register (POSITION is not
// readable in the pixel shader before S.M. 3.0)
float2 perfectReflUV = (IN.CopyPos.xy / IN.CopyPos.w)*float2(0.5f,-0.5f) + 0.5f;
// Fetch from the screenspace reflection map, the approximation of the track to
// reflected object distance... It has to be normalized between zero and one.
float reflectionDistance = tex2D(REFLECTIONMAP, perfectReflUV).a;
// Compute a distortion approximation by scaling the normalmap normal (expressed
// in tangent space) by a constant factor
float2 distortionApprox = normalMapNormalTGS.xy * DISTORTIONFACTOR;
// Fetch the final reflected object color...
float2 reflUV = perfectReflUV + distortionApprox * reflectionDistance;
float3 reflection = tex2D(REFLECTIONMAP, reflUV).rgb;
That actually works, but it will be very noisy, especially when animated. Why? Because the frequency of our UV distortion can be very high, as it depends on the track normalmap, and the track is nearly parallel to the view direction, so its texture-mapping frequencies easily get very high (that's also why anisotropic filtering is a must for racing games).
How do we fight high frequencies? Well, with supersampling! But that's expensive... Other ideas? Who said prefiltering? We could blur our distorted result... but that's pretty much like blurring the reflection image itself... and that is easy, we can just generate mipmaps for it! We know how much we are distorting the reads from that image, so we can choose the mipmap level based on that...
OK, we're ready for the final version of our code now... I've also implemented another slight improvement: I read the distance from a pre-distorted UV. That will cause some reflections of near objects to leak into the far ones (e.g. the sky), but the previous version had the opposite problem, which in my opinion was more noticeable... Enjoy!
// Store a copy of the POSITION register in another register (POSITION is not
// readable in the pixel shader before S.M. 3.0)
float2 perfectReflUV = (IN.CopyPos.xy / IN.CopyPos.w)*float2(0.5f,-0.5f) + 0.5f;
// Compute a distortion approximation by scaling by a constant factor the normalmap
// normal (expressed in tangent space)... 0.5f is an estimate of the "right"
// reflectionDistance that we don't know (we should raymarch to find it...)
float2 distortionApprox = normalMapNormalTGS.xy * DISTORTIONFACTOR * 0.5f;
// Fetch from the screenspace reflection map, the approximation of the track to
// reflected object distance... It has to be normalized between zero and one.
float reflectionDistance = tex2D(REFLECTIONMAP, perfectReflUV + distortionApprox).a;
distortionApprox = normalMapNormalTGS.xy * DISTORTIONFACTOR * reflectionDistance;
// we could continue iterating to find an intersection, but we don't...
// Fetch the final reflected object color:
float2 reflUV = perfectReflUV + distortionApprox;
float4 reflUV_LOD = float4(reflUV, 0, REFLECTIONMAP_MIPMAP_LEVELS * reflectionDistance);
float3 reflection = tex2Dlod(REFLECTIONMAP, reflUV_LOD).rgb;
Last but not least, you'll notice that I haven't talked much about programmer-artist iteration, even if I'm kind of an "evangelist" of that. Why? It's simple: if you're asked to reproduce reality, then you know what you want, and if you do that by approximating the real thing, you know which errors you're making, so there's hardly much to iterate on. Of course the final validation has to come from the art direction, and of course they can say it looks like crap and that they prefer a hack over your nicely crafted, physically inspired code... But that did not happen, and in any case a physically based effect usually requires way fewer parameters, and thus less tuning and iteration, than a hack-based one...
Update: continues here...
Update: some slight changes to the "final code"
Update: I didn't provide many details about my use of texture mipmaps as an approximation of various blur levels... That's of course wrong, and it may be very wrong if you have small emissive objects (e.g. headlights or traffic lights) in your reflection map. In that case you might want to cheat and render those objects with a halo (particles...) around them, to "blur" them more without extra rendering costs, or do the right thing: use a 3D texture instead of mipmap levels, blur each z-slice with a different kernel width, and maybe consider some form of HDR color encoding...
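A sketch of that 3D-texture variant (an assumed setup, not something I shipped): the reflection is copied into a small volume texture whose slices are pre-blurred with increasing kernel widths, so selecting the blur amount becomes a single filtered fetch.

sampler3D REFLECTIONVOLUME; // slice 0 = sharp reflection, last slice = widest blur

float3 FetchBlurredReflection(float2 reflUV, float reflectionDistance)
{
    // The z coordinate selects the pre-blurred slice; trilinear filtering blends
    // between adjacent blur levels, much like the mipmap LOD did.
    return tex3D(REFLECTIONVOLUME, float3(reflUV, reflectionDistance)).rgb;
}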
14 October, 2008
Normals without Normals
Long time no write! Just a small post: I'll publish some source code snippets for the "Normals without Normals" hack... More to come!
The main idea is that we can easily compute normals in a pixel shader using the ddx/ddy instructions... The problem with that technique is that we end up with the true face normals, not the smooth interpolated ones we need for Gouraud shading... To solve this we render the geometry in two passes: in the first pass we render the geometry into its UV space, baking the derived normals to a texture; then we blur that texture and access it in the standard forward rendering pass as a normalmap...
Note that the same ddx/ddy trick can be used to compute a tangent basis, which is especially useful if you don't have one, or don't have the vertex bandwidth for one... You can find the details of that technique in ShaderX 5 ("Normal Mapping without Pre-Computed Tangents" by Christian Schueler); the only catch is that there the tangent space is not re-orthonormalized around the Gouraud-interpolated normal, but that's easy to do.
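A sketch in the spirit of that approach (not the book's listing; here the frame is built around the interpolated normal N, with p the view- or world-space position and uv the normalmap coordinates):

float3x3 CotangentFrame(float3 N, float3 p, float2 uv)
{
    // Screen-space derivatives of position and UV...
    float3 dp1 = ddx(p),   dp2 = ddy(p);
    float2 duv1 = ddx(uv), duv2 = ddy(uv);
    // ...solved for the tangent/bitangent that map one onto the other.
    float3 dp2perp = cross(dp2, N);
    float3 dp1perp = cross(N, dp1);
    float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    float3 B = dp2perp * duv1.y + dp1perp * duv2.y;
    float invMax = rsqrt(max(dot(T, T), dot(B, B)));
    return float3x3(T * invMax, B * invMax, N);
}

That aside, the baking pass itself is below.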
NormalBakeVS_Out NormalBakeVS(GeomVS_In In)
{
NormalBakeVS_Out Out;
Out.Pos = float4(In.UV * float2(2,-2) + float2(-1,1),0,1);
Out.NormPos = mul(In.Pos, WorldViewM); // view-space position, used to derive the normal in the PS
return Out;
}
float4 NormalBakePS(NormalBakeVS_Out In) : COLOR
{
float3 d1 = ddx(In.NormPos);
float3 d2 = ddy(In.NormPos);
float3 normal = normalize(cross(d1,d2)); // this normal is dp/du X dp/dv
// NOTE: normal.z is always positive as we bake normals in view-space
return float4(normal.xy * 0.5 + 0.5, 0, 1); // z unused; alpha = 1 can mark covered texels for the blur pass
}
The model should have a suitable UV mapping. For this technique to work well, that mapping should respect the following properties (in order of importance...):
- Two different points on the mesh should map to two different points in UV (COMPULSORY!)
- No discontinuities: the UV mapping should not be discontinuous on the mesh (note that if the UVs are accessed with wrapping, the UV space is toroidal...)
- No distortion: the shortest path between two points on the mesh should be the same as the distance in UV space up to a multiplicative constant
- Any point in UV space should map to a point on the mesh
Discontinuities are hard to avoid; if present, they can be made less obvious by passing to the normal baking a mesh that is extended across the discontinuities. For each edge in UV space, you can extrude that edge outwards (creating a polygon band around it that will be rendered only for baking), overlapping the existing mesh geometry but with a mapping adjacent to the edge in UV space...
The "non full" UV space problem (last point) is addressed by discarding samples, during the blur phase, in areas that were not written by the mesh polygons. Another approach could be the use of pyramidal filters and "inpaiting" (see the work of Kraus and Strengert).
As ATI demonstrated with the subsurface scattering technique, it's possible to save some computations by discarding non-visible triangles in the render to texture passes using early-Z (see Applications of Explicit Early-Z Culling)
In the second rendering pass, we simply recover the normal stored in the render to texture surface, and that's it:
float4 GeomPS(GeomVS_Out In) : COLOR
{
float2 samp = tex2D(BakeSampler, In.UV.xy).xy * 2 - 1;
float3 normal_sharp = float3(samp, sqrt(1 - dot(samp,samp)));
...
}
Note: the main point is that there are a lot of different spaces we can express our computations in, and often choosing the right one is the key to solving a problem, especially on the GPU, where we are limited by its computational model. Don't take my implementation too seriously, it's just an experiment around an idea. Actually it's probably simpler to do the same in screen space, for example, devising a smart way to compute the blur kernel size, e.g. as a function of the projected triangle size (which can be estimated with the derivatives)...