Even shadows can't be considered solved for any but the simplest kind of light source (directional or sunlight, where Cascaded Shadow Maps seem to be the de facto standard nowadays).
It looks like we have a plethora of techniques, and choosing the best one can be daunting. But if you look a little closer, you'll realize that all those different lighting systems are really just permutations of a few basic choices. And that by understanding those, you can end up with novel ideas as well. Let's see.
Nowadays you'll hear a lot of discussion around "deferred" versus forward rendering, with the former starting to be the dominant choice, most probably because the open-world action-adventure-FPS genre is so popular.
The common wisdom is that if you need a lot of lights, deferred is the solution. While there is some truth in that statement, a lot of people accept it blindly, without much thought... and that is obviously bad.
Can't forward rendering handle an arbitrary number of lights? It can't handle an arbitrary number of analytic lights, true, but there are other ways to abstract and merge lights that don't live in screen space. What about spherical harmonics, irradiance voxels, lighting cubemaps?
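To make the idea concrete, here's a minimal C++-style sketch of one such merged representation (all names and the layout are my assumptions, purely for illustration): many directional lights get projected into a single 2-band spherical harmonic, and the forward shader then evaluates that one structure instead of looping over every light.

```cpp
// Minimal sketch: merge N directional lights into one 2-band spherical
// harmonic, then evaluate diffuse irradiance from it per pixel.
// All names and the layout are assumptions made for this illustration.

struct Vec3 { float x, y, z; };              // assumed normalized where noted

struct SH4 { float c[4] = {0, 0, 0, 0}; };   // bands 0-1, one colour channel

// Real spherical harmonic basis, bands 0 and 1, for direction d (normalized).
static void shBasis(const Vec3& d, float b[4]) {
    b[0] = 0.282095f;            // Y(0,0)
    b[1] = 0.488603f * d.y;      // Y(1,-1)
    b[2] = 0.488603f * d.z;      // Y(1,0)
    b[3] = 0.488603f * d.x;      // Y(1,1)
}

// Add one directional light (dir points towards the light) into the SH.
void addDirectionalLight(SH4& sh, const Vec3& dir, float intensity) {
    float b[4]; shBasis(dir, b);
    for (int i = 0; i < 4; ++i) sh.c[i] += intensity * b[i];
}

// Per pixel: irradiance on a surface with normal n, using the clamped-cosine
// convolution constants (pi for band 0, 2*pi/3 for band 1).
float evaluateIrradiance(const SH4& sh, const Vec3& n) {
    float b[4]; shBasis(n, b);
    const float A[4] = { 3.14159f, 2.09440f, 2.09440f, 2.09440f };
    float e = 0.0f;
    for (int i = 0; i < 4; ++i) e += A[i] * sh.c[i] * b[i];
    return e > 0.0f ? e : 0.0f;
}
```

The per-pixel cost is now constant no matter how many lights were merged in, which is exactly the property people usually credit to deferred rendering.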
Another example could be the light-prepass deferred technique. It's said to require less bandwidth than the standard deferred geometry-buffer one, and to allow more material variation. Is that true? Try to compute the total bandwidth of the three passes of this method compared to the two of the standard one. And try to reason about how many material models you could really express with the information light-prepass stores...
It's all about tradeoffs, really. And to understand those, you first have to understand your choices.
Choice 1: Where/When to compute lighting.
Object-space. The standard forward rendering scenario. Lighting and the material's BRDF are computed (integrated) in a single pass, the normal shading one. This of course allows a lot of flexibility, as you get all the information you could possibly want to perform local lighting computation.
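As a reference point, this is roughly what that single pass looks like, in a minimal C++-style sketch (Phong-Lambert materials, analytic point lights; all names are assumptions made for this illustration):

```cpp
// Minimal sketch of the single forward pass: Phong-Lambert materials lit by
// analytic point lights. All names are assumptions made for this illustration.
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator*(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 a)         { return a * (1.0f / std::sqrt(dot(a, a))); }

struct Light    { Vec3 position, colour; };
struct Material { Vec3 albedo;  float specPower; };

// The "normal shading pass": both the material and every light are evaluated here.
Vec3 shadeForward(Vec3 worldPos, Vec3 n, Vec3 view,
                  const Material& m, const Light* lights, int lightCount) {
    Vec3 out = {0, 0, 0};
    for (int i = 0; i < lightCount; ++i) {
        Vec3  l  = normalize(lights[i].position - worldPos);
        Vec3  h  = normalize(l + view);
        float nl = std::max(dot(n, l), 0.0f);
        float nh = std::max(dot(n, h), 0.0f);
        float spec = nl > 0.0f ? std::pow(nh, m.specPower) : 0.0f;
        // Per-light contribution, accumulated right in the shading pass.
        out = out + Vec3{ lights[i].colour.x * (m.albedo.x * nl + spec),
                          lights[i].colour.y * (m.albedo.y * nl + spec),
                          lights[i].colour.z * (m.albedo.z * nl + spec) };
    }
    return out;
}
```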
It can lead to some pretty complicated shaders and shader permutations as you keep adding lights and materials to the system, and it's often criticized for that.
As I already said, that's fairly wrong, as there's nothing in the world that forces you to use analytic lights, which require ad-hoc shader code for each of them. That is not a fault of forward rendering, but of a given lighting representation.
It's also wrong to see it as the most flexible system. It knows everything about local lighting, but it does not know anything about global lighting. Do you need subsurface scattering? A common approach is to "blur" the diffuse lighting, scattering it across the object's surface. This is impossible for a forward renderer: it does not have that information. You have to start thinking about multiple passes... that is, deferring some of your computation, isn't it?
Another pretty big flaw, one that can seriously affect some games, is that its cost depends on the geometric complexity of your models. If you have too many, too small triangles, you can incur serious overdraw and partial-quad overheads. Those will hurt you pretty badly, and you might want to consider offloading some or all of your lighting computations to other passes for performance reasons. On the other hand, you get for free some sort of multiresolution ability, because you can easily split your lighting between the vertex and pixel shaders.
Screen-space. Deferred, light-prepass, inferred lighting and so on. All based on the premise of storing some information about your scene in a screen-space buffer, and using that baked information to perform some or all of your lighting computations. It is a very interesting solution, and once you fully understand it, it might lead to some pretty nice and novel implementations.
As filling the screen-space buffers is usually fast, with the only bottleneck being the blending ("raster operations") bandwidth, it can speed up your shading quite a bit if you have very small triangles leading to bad quad efficiency (recap: current GPUs rasterize triangles into 2x2 pixel blocks; on triangle edges only some of a quad's samples fall inside the triangle, yet all of them get shaded, while only the ones inside contribute to the image).
The crucial thing is to understand what to store in those buffers, how to store it, and which parts of your lighting to compute out of the buffers.
Deferred rendering chooses to store material parameters and compute local lighting out of them. For example, if your materials are Phong-Lambert, what does your BRDF need? The normal vector, the Phong exponent, the diffuse albedo and Fresnel colour, the view vector and the light vector.
All but the last are "material" properties; the light vector depends on the lighting (surprisingly). So we store the material properties in a screen-space "geometry buffer", and then run a series of passes, one per light, that provide the last bit of information and compute the shading.
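Here's a minimal C++-style sketch of that split for the Phong-Lambert case above (the buffer layout and all names are assumptions made for this illustration, not any specific engine's format):

```cpp
// Minimal sketch of the deferred split for the Phong-Lambert case above.
// The buffer layout and names are assumptions made for this illustration.
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Geometry pass output, per pixel: everything the BRDF needs except the light.
struct GBufferTexel {
    Vec3  normal;
    Vec3  diffuseAlbedo;
    Vec3  fresnelColour;
    float specPower;
    float depth;                 // world position is reconstructed from this
};

// One lighting pass per light: read the texel, add that light's contribution
// into the accumulation buffer.
Vec3 lightingPass(const GBufferTexel& g, Vec3 lightDir, Vec3 lightColour, Vec3 viewDir) {
    Vec3  h  = { lightDir.x + viewDir.x, lightDir.y + viewDir.y, lightDir.z + viewDir.z };
    float hl = std::sqrt(dot(h, h));
    h = { h.x / hl, h.y / hl, h.z / hl };
    float nl   = std::max(dot(g.normal, lightDir), 0.0f);
    float nh   = std::max(dot(g.normal, h), 0.0f);
    float spec = nl > 0.0f ? std::pow(nh, g.specPower) : 0.0f;
    return { lightColour.x * (g.diffuseAlbedo.x * nl + g.fresnelColour.x * spec),
             lightColour.y * (g.diffuseAlbedo.y * nl + g.fresnelColour.y * spec),
             lightColour.z * (g.diffuseAlbedo.z * nl + g.fresnelColour.z * spec) };
}
```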
Light-prepass? Well, you might imagine, even without knowing much about it, that it chooses to store lighting information and to execute passes that "inject" the material information and compute the final shading. The tricky bit, which made this technique not so obvious, is that you can't store things like the light vector, as you would then need a structure capable of storing, in general, a large and variable number of vectors. Instead, light-prepass exploits the fact that some bits of light-dependent information are simply added together, light after light, in the rendering equation, so the more lights you have the more you keep accumulating, without needing to store extra information. For Phong-Lambert, those are the normal-dot-light diffuse term and the exponentiated specular (normal-dot-half-vector) term.
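Again a minimal C++-style sketch, with the same caveat that the buffer layout and names are assumptions made purely for illustration:

```cpp
// Minimal sketch of the light-prepass accumulation for the same Phong-Lambert
// case. The buffer layout and names are assumptions made for this illustration.
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Screen-space light buffer: only additive, light-dependent terms, so any
// number of lights just keeps accumulating into the same texel.
struct LightBufferTexel {
    Vec3  diffuse  = {0, 0, 0};  // sum over lights of lightColour * N.L
    float specular = 0.0f;       // sum over lights of (N.H)^n * N.L (monochrome here)
};

// Lighting pass: needs only the thin first pass's normal, spec power and depth.
void accumulateLight(LightBufferTexel& t, Vec3 n, float specPower,
                     Vec3 lightDir, Vec3 lightColour, Vec3 viewDir) {
    Vec3  h  = { lightDir.x + viewDir.x, lightDir.y + viewDir.y, lightDir.z + viewDir.z };
    float hl = std::sqrt(dot(h, h));
    float nl = std::max(dot(n, lightDir), 0.0f);
    float nh = std::max(dot(n, { h.x / hl, h.y / hl, h.z / hl }), 0.0f);
    t.diffuse = { t.diffuse.x + lightColour.x * nl,
                  t.diffuse.y + lightColour.y * nl,
                  t.diffuse.z + lightColour.z * nl };
    t.specular += std::pow(nh, specPower) * nl;
}

// Second geometry pass: re-render the scene, read the accumulated lighting,
// inject the material's albedo and Fresnel colour.
Vec3 resolveMaterial(const LightBufferTexel& t, Vec3 albedo, Vec3 fresnel) {
    return { albedo.x * t.diffuse.x + fresnel.x * t.specular,
             albedo.y * t.diffuse.y + fresnel.y * t.specular,
             albedo.z * t.diffuse.z + fresnel.z * t.specular };
}
```

Notice how the light buffer collapses all lights into a handful of accumulated terms; that's also why the material variety you can express is limited, since the second geometry pass only gets to scale and combine those few terms.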
Is this the only possible choice to bake lighting in screen space without needing an arbitrary number of components? Surely not. Another way could be using spherical harmonics per pixel, for example... Not a smart choice, in my opinion, but if you think about deferred rendering in this way, you can start thinking about other decompositions. Deferring diffuse shading, the component where lighting defines shapes, and computing specular in object space? Be my guest. The possibilities are endless...
But where deferring lighting into multiple passes really shows its power over forward rendering is when you need to access non-local information. I've already given the example of subsurface scattering, and on this blog I've also talked (badly, as it's obvious and not worth a paper) about image-space gathering, which is another application of the idea. Screen-space ambient occlusion? Screen-space diffuse occlusion/global illumination? Same idea. Go ahead, make your own!
Other spaces. Why should we restrict ourselves to baking information in screen space? Other spaces could prove more useful, especially when you need to access global information. Do you need to access the neighbours on a surface? Do you want your shading complexity to be independent of camera movements? Bake the information in texture space. Virtual texture mapping (also known as clipmaps or megatextures) plus lighting in texture space equals surface caching...
Light space is another choice, and shadow mapping is only one possible application. Bake lighting there and you get the so-called reflective shadow maps.
What about world space? You could bake the lighting passing through a given number of locations and shade your objects by interpolating that information appropriately. Spherical harmonic probes, cubemaps, dual-paraboloid maps and irradiance volumes are some of the names...
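For instance, a world-space irradiance volume can be as simple as a regular grid of probes that each shaded point interpolates. A minimal C++-style sketch (the grid layout and names are assumptions made for this illustration):

```cpp
// Minimal sketch of a world-space irradiance volume: a regular grid of baked
// probes, trilinearly interpolated at each shaded point. The grid layout and
// names are assumptions made for this illustration.
#include <cmath>

struct Vec3  { float x, y, z; };
struct Probe { Vec3 irradiance; };       // in practice an SH or a small cubemap per probe

struct IrradianceVolume {
    Vec3   origin;
    float  cellSize;
    int    nx, ny, nz;
    const Probe* probes;                 // nx * ny * nz probes, x-major layout

    // Returns the interpolated baked lighting at world position p
    // (indices are assumed to be in range, for brevity).
    Vec3 sample(Vec3 p) const {
        float fx = (p.x - origin.x) / cellSize;
        float fy = (p.y - origin.y) / cellSize;
        float fz = (p.z - origin.z) / cellSize;
        int ix = (int)std::floor(fx), iy = (int)std::floor(fy), iz = (int)std::floor(fz);
        float tx = fx - ix, ty = fy - iy, tz = fz - iz;
        Vec3 r = {0, 0, 0};
        for (int k = 0; k < 8; ++k) {    // blend the eight surrounding probes
            int ox = k & 1, oy = (k >> 1) & 1, oz = (k >> 2) & 1;
            float w = (ox ? tx : 1 - tx) * (oy ? ty : 1 - ty) * (oz ? tz : 1 - tz);
            const Probe& pr = probes[(ix + ox) + (iy + oy) * nx + (iz + oz) * nx * ny];
            r = { r.x + pr.irradiance.x * w,
                  r.y + pr.irradiance.y * w,
                  r.z + pr.irradiance.z * w };
        }
        return r;
    }
};
```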
A note about sampling. Each space has different advantages; think about how you can leverage them. Some spaces, for example, have components that remain constant, while they would vary in others: normal maps are constant in texture space, but they need to be re-baked every frame in screen space. Some spaces enable baking at a lower frequency than others, and some are more suitable for temporal coherency (i.e. in screen space you can leverage camera reprojection, while in other spaces you could avoid updating everything every frame). Hi-Z culling and multi-resolution techniques can be the key to meeting your performance targets.
Ok, that's enough for now.
Next post I'll talk about the second choice, that is, how to represent your lighting components (analytic versus table-based, frequency versus spatial domain, etc.), and how to take all those decisions, with some guidelines to untangle this mess of possibilities...
Meanwhile, if you want to see a game that actually mixed many different spaces and techniques to achieve its lighting, I'd suggest you read about Halo 3...