I've said in a previous post that pixel shaders have automatic LOD, in the sense that far away/small objects usually fill a small part of the screen, so there is an implicit LOD done by the perspective projection. And this is true.
The problem is that small, far away objects, due to the same perspective projection, tend to be the vast majority of the objects we draw. If we have many different shading techniques, this also means those objects cause the vast majority of state changes, and thus pipeline stalls. And that's bad.
How to solve that? Replace the shading of far away objects with a "maximum common denominator" technique, so all those objects can be grouped together and do not issue state/shader changes.
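A minimal CPU-side sketch of the idea, under assumptions of my own (the names `DrawItem`, `COMMON_LOD_SHADER` and the distance threshold are illustrative, not from any particular engine): past a given distance, every object's material shader is replaced with the shared far-LOD shader, and sorting by shader then makes all those objects contiguous, so they render with a single state change.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative shader IDs; COMMON_LOD_SHADER is the shared
// "maximum common denominator" shader for distant objects.
enum ShaderId { SHADER_SKIN = 0, SHADER_METAL = 1, COMMON_LOD_SHADER = 99 };

struct DrawItem {
    float distance;   // camera-to-object distance
    ShaderId shader;  // shader the material originally asks for
};

// Swap in the common far-LOD shader beyond the threshold, then sort by
// shader so identical shaders are contiguous: the renderer pays one
// state change per shader group instead of one per object.
// Returns the number of shader state changes the sorted list would issue.
int countStateChanges(std::vector<DrawItem>& items, float farLodDistance) {
    for (auto& it : items)
        if (it.distance > farLodDistance) it.shader = COMMON_LOD_SHADER;
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.shader < b.shader;
              });
    int changes = 0;
    for (std::size_t i = 1; i < items.size(); ++i)
        if (items[i].shader != items[i - 1].shader) ++changes;
    return changes;
}
```

With a scene where most objects are distant and use mixed materials, the count of state changes collapses to roughly one per *near* material, regardless of how many distant objects there are.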
We have a shader-baking system that we use to bake shaders, at a distance, into low-res diffuse maps, normal maps and other texture channels.
Works well, esp. on the consoles with their fixed resolution, and is a huge saving, especially in the number of draw calls but also in pure vertex & pixel shader cost.
Very nice. How do you bake the shader? I mean, its properties are going to depend on view, light direction etc. Are you trying to fit a generic shading model to a given one, or do you just have knowledge of what a given shader does and fit its data into the common denominator shader via hardcoded logic?
Sometimes we pre-render the far objects into textures and just display them. Combined with streaming, it's a technique that saves a lot of GPU cycles. TiZ
Mhm, from what I can understand, repi was probably talking about the same technique, i.e. not baking anything per-geometry but doing something impostor-like, stamping very distant objects' color and normals into some "skydome" textures, computed offline. But this is not only pixel shader LOD, it's an entirely different matter!