Long time no write! Just a small post, I'll publish some sourcecode snippets for the Normals without Normals hack... More to come!
The main idea is that we can easily compute normals in a pixel shader using the ddx/ddy instructions. The problem with that technique is that we end up with the true, faceted surface normals, not the smooth, interpolated ones that we need for Gouraud shading. To solve this we render the geometry in two passes: in the first pass we render the geometry to a texture, then we blur that texture and access it in the standard forward rendering pass as a normalmap.
Note that the same ddx/ddy technique can be used to compute a tangent basis, which is especially useful if you don't have one, or don't have the vertex bandwidth for one. You can find the details of that technique in ShaderX5 ("Normal Mapping without Pre-Computed Tangents" by Christian Schueler); the only catch is that there the tangent space is not re-orthonormalized around the Gouraud-interpolated normal, but that's easy to do.
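For reference, here is a minimal sketch of how such a frame can be derived in the pixel shader. This is the commonly circulated formulation of Schueler's idea, not the ShaderX5 listing (the names are mine); note that it builds the frame around whatever normal N you pass in, so feeding it the interpolated normal addresses the re-orthonormalization note above:

float3x3 CotangentFrame(float3 N, float3 p, float2 uv)
{
    // screen-space derivatives of the (view-space) position and of the UVs
    float3 dp1 = ddx(p);
    float3 dp2 = ddy(p);
    float2 duv1 = ddx(uv);
    float2 duv2 = ddy(uv);
    // solve the linear system for the vectors aligned with the UV gradients
    float3 dp2perp = cross(dp2, N);
    float3 dp1perp = cross(N, dp1);
    float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    float3 B = dp2perp * duv1.y + dp1perp * duv2.y;
    // normalize to a scale-invariant frame around N
    float invmax = rsqrt(max(dot(T, T), dot(B, B)));
    return float3x3(T * invmax, B * invmax, N);
}

Back to the trick itself, here is the baking vertex shader: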
NormalBakeVS_Out NormalBakeVS(GeomVS_In In)
{
    NormalBakeVS_Out Out;
    // rasterize the mesh in UV space: remap UVs from [0,1] to clip space [-1,1], flipping Y
    Out.Pos = float4(In.UV * float2(2,-2) + float2(-1,1), 0, 1);
    // view-space position, to be differentiated in the pixel shader
    Out.NormPos = mul(In.Pos, WorldViewM);
    return Out;
}
float4 NormalBakePS(NormalBakeVS_Out In) : COLOR
{
    // screen-space derivatives of the view-space position; since we are rasterizing
    // in UV space, these follow the surface's u and v directions
    float3 d1 = ddx(In.NormPos);
    float3 d2 = ddy(In.NormPos);
    float3 normal = normalize(cross(d1,d2)); // this normal is dp/du X dp/dv
    // NOTE: normal.z is always positive as we bake normals in view-space
    // pack x,y into [0,1]; alpha = 1 can double as a coverage mask for the blur
    return float4(normal.xy * 0.5 + 0.5, 0, 1);
}
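The snippets reference a few declarations the post doesn't show; something along these lines is assumed (a guess at the original code, the exact layout and semantics don't matter much):

struct GeomVS_In
{
    float4 Pos : POSITION;  // object-space position
    float2 UV  : TEXCOORD0; // the unique mapping discussed below
};

struct NormalBakeVS_Out
{
    float4 Pos     : POSITION;  // UVs remapped to clip space
    float3 NormPos : TEXCOORD0; // view-space position (the mul result is truncated to float3)
};

float4x4 WorldViewM;  // concatenated world * view matrix
sampler  BakeSampler; // the baked (and then blurred) normal texture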
The model should have a suitable UV mapping. For this technique to work well, the mapping should respect the following properties (in order of importance):
- Two different points on the mesh should map to two different points in UV space (COMPULSORY!)
- No discontinuities: the UV mapping should not be discontinuous on the mesh (note that if the UVs are accessed with wrapping, the UV space is toroidal...)
- No distortion: the shortest path between two points on the mesh should be the same as their distance in UV space, up to a multiplicative constant
- Any point in UV space should map to a point on the mesh
Discontinuities are hard to avoid; if present, they can be made less obvious by passing to the normal baking a mesh that is extended across the discontinuities. For each edge in UV space, you can extrude that edge outwards (creating a polygon band around it that is rendered only for baking), overlapping the existing mesh geometry but with a mapping adjacent to the edge in UV space...
The "non full" UV space problem (last point) is addressed by discarding samples, during the blur phase, in areas that were not written by the mesh polygons. Another approach could be the use of pyramidal filters and "inpaiting" (see the work of Kraus and Strengert).
As ATI demonstrated with their subsurface scattering technique, it's possible to save some computation by discarding non-visible triangles in the render-to-texture passes using early-Z (see "Applications of Explicit Early-Z Culling").
In the second rendering pass, we simply recover the normal stored in the render-to-texture surface, and that's it:
float4 GeomPS(GeomVS_Out In) : COLOR
{
    // unpack the blurred, baked normal from the render target (x,y only)
    float2 samp = tex2D(BakeSampler, In.UV.xy).xy * 2 - 1;
    // reconstruct z, which we know to be positive (see the baking shader)
    float3 normal_sharp = float3(samp, sqrt(saturate(1 - dot(samp, samp))));
    ...
}
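If the ddx/ddy tangent trick from the top of the post is used as well, this recovered normal is exactly the N you would feed to the hypothetical CotangentFrame helper sketched earlier, so the frame ends up built around the smooth normal, as the earlier note suggests. A possible continuation (assuming GeomVS also outputs the view-space position as In.ViewPos, and that a NormalMapSampler exists; neither is in the original code):

    float3x3 TBN = CotangentFrame(normal_sharp, In.ViewPos, In.UV.xy);
    float3 detail = tex2D(NormalMapSampler, In.UV.xy).xyz * 2 - 1; // tangent-space normalmap sample
    float3 normal_mapped = normalize(mul(detail, TBN));            // bring it to view space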
Note: the main point is that there are a lot of different spaces we can express our computations in, and often choosing the right one is the key to solving a problem, especially on the GPU, where we are limited by its computational model. Don't take my implementation too seriously, it's just an experiment around an idea. It's probably simpler to do the same thing in screen space, for example, devising a smart way to compute the blur kernel size, i.e. as a function of the projected triangle size (which can be estimated with the derivatives)...
4 comments:
Nice trick, pity it doesn't look useful (it does much more computation to save 12 or 4 bytes per vertex).
Still, it points nicely to the whole ddx/ddy spectrum of tricks :D.
Early-Z/Hi-Z... haven't all current-gen games been using it for the last three years?!
The "trick" with Hi-Z was related to the render-to-texture computation, and I've seen many games not to use that kind of optimization (see the ATI paper I cited).
The usefulness of that is limited, but it's not related to saving a few bytes per vertex, it's for when you don't have vertex normals at all (i.e. for procedurally generated mesh, for example, a cloth simulation).
But if the mesh is procedurally generated, there seems to be a problem with having a reasonable UV mapping. Any suggestions on how to handle that?
There are situations where you do have UVs, I said "cloth" for a reason...