14 March, 2008

Realtime radiosity?

Nice. But how?

A really smart coworker of mine noticed that some pillars in the demo were being lit as a whole, like with a per-object coefficient, as the light changed. Add to that their technology page and its description of how the system integrates with direct lighting... Mhm.

They seem to precompute visibility at given points around the scene (linked to objects? to vertices? to uniquely mapped textures? probably the latter). Then they take all the direct illumination plus shadows from a big lightmap of the entire level (how do they get an unwrap of the whole scene? is it computed dynamically based on the objects you see, like packing the lightmaps of the various visible objects into one big atlas? dunno, but it seems you have to bake your direct illumination into lightmaps, even if it's dynamic, for this to work). So probably they're computing the amount of lighting at each sample point by gathering the visible light texels from the lightmaps (so are they expressing visibility directly in lightmap UV space? are they limited to a single bounce? probably).
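To make the guess concrete, here's a toy sketch of what that gathering step could look like: each sample point stores a precomputed list of (lightmap texel, weight) links, and the runtime indirect light is just a weighted sum over the baked direct-lighting lightmap. Every name and number here is invented by me, not taken from their tech.

```python
def gather_indirect(sample_visibility, lightmap):
    """One-bounce gather (my guess at the scheme): sample_visibility maps
    each sample point to a list of (texel_index, weight) pairs computed
    offline; lightmap is a flat list of RGB triples holding the baked
    direct illumination for the current frame."""
    indirect = []
    for links in sample_visibility:
        r = g = b = 0.0
        for texel, weight in links:
            lr, lg, lb = lightmap[texel]
            r += weight * lr
            g += weight * lg
            b += weight * lb
        indirect.append((r, g, b))
    return indirect

# Toy data: two sample points, a three-texel "lightmap".
lightmap = [(1.0, 0.9, 0.8), (0.2, 0.2, 0.2), (0.0, 0.5, 1.0)]
visibility = [
    [(0, 0.5), (1, 0.25)],  # point 0 sees texels 0 and 1
    [(2, 1.0)],             # point 1 sees only texel 2
]
print(gather_indirect(visibility, lightmap))
```

The nice property, if this is anywhere near what they do, is that the expensive part (which texels each point sees, and with what form factor) never changes at runtime; only the lightmap contents do.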

I'm just guessing, I could be completely wrong. But still, I'm curious. If they're doing something like that, do they have a LOD on the number of sample points? How much memory does each point require to store the precomputed visibility? Can they gather more than a single bounce? As they talk of precomputed visibility, surely they're not going to handle dynamic worlds... Still, it's kinda nice.
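On the memory question, a back-of-envelope estimate is easy to do, with every number below being a pure assumption on my part (link count, bytes per link, all of it):

```python
def visibility_memory_mb(num_samples, links_per_sample, bytes_per_link=6):
    """Assumed layout: each visibility link is a 4-byte texel index plus
    a 2-byte quantized weight. Returns the total in MiB."""
    return num_samples * links_per_sample * bytes_per_link / (1024 * 1024)

# 100k sample points, each seeing ~200 lightmap texels:
print(round(visibility_memory_mb(100_000, 200), 1))  # ~114.4 MiB
```

Numbers like that are why I'd expect some aggressive LOD and compression on the link lists if the scheme is anything like what I described.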

Shame that you have to bake your dynamic lighting into lightmaps; that sounds expensive, especially because you can't limit the baking to visible surfaces (as the ATI skin subsurface scattering demo does, for example, with a neat use of early-z rejection) but have to do it for every object/surface that could influence the scene lighting...
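That bake step I'm complaining about would look roughly like this, invented names and a bare Lambert term with no shadows, just to show why it scales with every bounce-casting surface rather than with what's on screen:

```python
def bake_direct(texels, lights):
    """Per-frame direct-lighting bake (my sketch, not their code).
    texels: list of (position, normal) for every lightmap texel of every
    surface that can bounce light, on screen or not.
    lights: list of (light_position, intensity).
    Returns a scalar lighting value per texel."""
    baked = []
    for (px, py, pz), (nx, ny, nz) in texels:
        total = 0.0
        for (lx, ly, lz), intensity in lights:
            dx, dy, dz = lx - px, ly - py, lz - pz
            dist2 = dx * dx + dy * dy + dz * dz
            if dist2 == 0.0:
                continue
            # Lambert term: N.L with L normalized, plus distance falloff.
            ndotl = (nx * dx + ny * dy + nz * dz) * dist2 ** -0.5
            if ndotl > 0.0:
                total += intensity * ndotl / dist2
        baked.append(total)
    return baked

# One texel on a floor, one light two units directly above it.
texels = [((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
lights = [((0.0, 2.0, 0.0), 4.0)]
print(bake_direct(texels, lights))
```

The cost is texels x lights every frame the lighting changes, over the whole level, which is exactly the part an early-z style visibility trick can't help with here.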
