Object vertex shaders. Features:
- Dependent on the mesh, decoupled from the pixels (this means you can have different shading densities across your model!).
- Does not LOD automatically (but LOD is usually easy to implement, with many possible strategies).
- Can alter the geometry (and topology, on DX10 hardware, via geometry shaders).
- Caching is dependent on the vertex ordering.
- Culling is not automatic (but with occlusion queries it's no longer harder than the highly refined culling the rasterizer provides via z-buffering, early-z, hierarchical-z...).
- Outputs are interpolated across the mesh surface linearly (and perspective-corrected, for texcoord interpolators).
- Can access neighbors' data only on DX10 hardware (and on DX9.5, read: Xbox 360).
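As an aside on the "perspective corrected" interpolation mentioned above, here is a minimal numeric sketch (not from the post) of why texcoord interpolators can't just be lerped in screen space: the hardware interpolates attr/w and 1/w linearly, then divides, which biases the result toward the nearer vertex as it should.

```python
# Perspective-correct attribute interpolation between two projected vertices.
# u0, u1: attribute values; w0, w1: clip-space w (proportional to depth).

def lerp(a, b, t):
    return a + (b - a) * t

def perspective_correct(u0, w0, u1, w1, t):
    # Interpolate u/w and 1/w linearly in screen space, then recover u.
    inv_w = lerp(1.0 / w0, 1.0 / w1, t)
    u_over_w = lerp(u0 / w0, u1 / w1, t)
    return u_over_w / inv_w

# Same attribute step, very different depths: naive lerp ignores depth,
# the correct result is pulled toward the nearer vertex (u0, at w0 = 1).
u_naive = lerp(0.0, 1.0, 0.5)
u_correct = perspective_correct(0.0, 1.0, 1.0, 10.0, 0.5)
```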
Material pixel shaders. Features:
- Dependent on the pixels, decoupled from vertex complexity (usually).
- LODs automatically (as LOD space is usually screen space: smaller objects cover fewer pixels, thus require less shading).
- More LOD can be applied (but it's not easy, e.g. scaling down shading features for far-away objects that can still fill a good percentage of the screen).
- Overdraw is a big problem.
- Can access neighbors' data (via derivative functions like ddx/ddy, limited to the currently drawn primitive).
- Powerful access to external data (via textures and samplers; this is also true for vertex shaders on unified shader APIs/hardware, i.e. DX10).
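The "neighbors data via derivative functions" point above can be sketched numerically: pixels are shaded in 2x2 quads, and ddx/ddy are simply finite differences inside the quad. The function names below (quad_ddx/quad_ddy) are illustrative, not a real API.

```python
# Coarse ddx/ddy as finite differences across a 2x2 pixel quad.
# quad is [[v00, v10], [v01, v11]]: rows go down (y), columns go right (x).

def quad_ddx(quad):
    # Horizontal difference: right pixel minus left pixel.
    return quad[0][1] - quad[0][0]

def quad_ddy(quad):
    # Vertical difference: lower pixel minus upper pixel.
    return quad[1][0] - quad[0][0]

# A value varying linearly as v = 3*x + 5*y, sampled on a 2x2 quad:
quad = [[0.0, 3.0], [5.0, 8.0]]
print(quad_ddx(quad), quad_ddy(quad))  # 3.0 5.0
```

This also hints at why the access is "limited to the currently drawn primitive": the quad only ever contains samples of the triangle being rasterized.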
Screen-space shaders. Features:
- Not coupled with objects (no need to cull, and also no overdraw).
- Not coupled with materials (no need to write the same code in all the material shaders).
- Outputs AND inputs are coupled with the screen pixels (reading data from a screen-sized buffer is hugely more expensive than getting the same data from vertex shader interpolators; inputs usually far outnumber outputs, and they have to be written by the material pixel shaders too, which easily becomes a bandwidth problem).
- Can randomly access neighbors' data.
- Low-frequency effects can be subsampled (kinda easily).
- Can be difficult to LOD (usually possible only if you have access to dynamic branching, with all its limits).
- Does not work with antialiasing.
- Can only "see" the first layer of visibility, so only limited geometrical information can be reconstructed (the eye's first hit, in raytracing terms; this could be solved with more buffers and depth peeling, but it's expensive).
- Precision problems (in the inputs, especially on older hardware without floating-point textures, and when reconstructing world-space positions from the z-buffer; tip: always use w-buffers, it's both faster and more accurate to reconstruct world space from them).
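To make the w-buffer tip concrete, here is a minimal sketch (illustrative, assuming a symmetric pinhole projection; not the post's code): with linear view-space depth stored per pixel, reconstruction is a single multiply along the per-pixel view ray, with no non-linear z-buffer inversion involved.

```python
import math

def view_ray(ndc_x, ndc_y, fov_y, aspect):
    # Direction through the pixel, scaled so that its z component is 1
    # in view units; ndc coordinates are in [-1, 1].
    tan_half = math.tan(fov_y * 0.5)
    return (ndc_x * tan_half * aspect, ndc_y * tan_half, 1.0)

def reconstruct_view_pos(ndc_x, ndc_y, linear_depth, fov_y, aspect):
    # linear_depth is the w-buffer value (view-space distance along z).
    rx, ry, rz = view_ray(ndc_x, ndc_y, fov_y, aspect)
    return (rx * linear_depth, ry * linear_depth, rz * linear_depth)

# Center of the screen at depth 10 reconstructs to (0, 0, 10) in view space.
print(reconstruct_view_pos(0.0, 0.0, 10.0, math.pi / 2, 1.0))
```

A world-space position then only needs one further transform by the inverse view matrix, which is where the "faster and more accurate" claim comes from.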
Antialiasing can be used with screen-space effects on newer hardware (definitely DX10.1, not sure about DX10).
I tried not to tie that post to given hardware; of course the list changes as hardware does, and it's already inaccurate if you have to factor in some older hardware. As for antialiasing in screen-space effects, that requires resolving the supersampled z-buffer and doing your effects on it; it's possible but very expensive, so what I wrote still mostly applies.