17 March, 2008

More on Raytracing

UPDATE: after a lot of discussion, I've moved part of my previous post into this new one and rewritten it to be clearer.

Recently Carmack talked about raytracing as well, but I haven't found that discussion to be particularly interesting. Everyone talks about raytracing, but no one has really persuaded me with their way of looking at it. So here's my point of view:

Raytracing is more decoupled from geometry. But it is false that it has a lower algorithmic complexity. It's easy to see that raytracing vs rasterization comes down to navigating the same screen-object database in a different order. And this still holds true if we use spatial hierarchies. With a KD-tree, for example, raytracing traces a ray through the tree for every pixel, so it costs numPixels*log(numObjects). Rasterization can, for each bounding box in the KD-tree, draw it and test it against the Z-buffer; if it's visible, it recurses. If the KD-tree fills the whole screen it will draw numPixels pixels at each tree level, so it's still numPixels*log(numObjects). That's all.
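To make the symmetry concrete, here's a minimal sketch of both traversals over the same KD-tree. All the types and stubs are hypothetical stand-ins (not any real API); it's just the skeleton of the two algorithms:

```cpp
#include <memory>
#include <vector>

// Minimal hypothetical types, just enough to express the two traversals.
struct AABB     { float min[3], max[3]; };
struct Triangle { float v[3][3]; };
struct Ray      { float origin[3], dir[3]; };

struct KDNode {
    AABB bounds;
    std::unique_ptr<KDNode> nearChild, farChild; // interior nodes have both
    std::vector<Triangle> triangles;             // filled only in leaves
    bool isLeaf() const { return !nearChild; }
};

// Stand-ins for the real machinery, stubbed so the sketch compiles.
bool intersectLeaf(const KDNode&, const Ray&) { return false; }
struct ZBuffer {
    bool boxVisible(const AABB&) const { return true; } // occlusion query
};
void drawTriangles(const std::vector<Triangle>&, ZBuffer&) {}

// Raytracing view: one ray per pixel walks down the tree, so the cost per
// pixel is O(log numObjects) tree levels.
bool traceRay(const KDNode& node, const Ray& ray)
{
    if (node.isLeaf())
        return intersectLeaf(node, ray);
    // (split-plane logic omitted: a real tracer visits the child the ray
    // enters first and skips the other when possible)
    return traceRay(*node.nearChild, ray) || traceRay(*node.farChild, ray);
}

// Rasterization view: draw each node's bounding box against the Z-buffer and
// recurse only where it is visible. When the tree covers the screen, each
// level touches ~numPixels pixels, giving numPixels*log(numObjects) again.
void rasterizeNode(const KDNode& node, ZBuffer& zbuf)
{
    if (!zbuf.boxVisible(node.bounds))
        return;                           // occluded: prune the whole subtree
    if (node.isLeaf()) {
        drawTriangles(node.triangles, zbuf);
        return;
    }
    rasterizeNode(*node.nearChild, zbuf); // front-to-back helps rejection
    rasterizeNode(*node.farChild, zbuf);
}
```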

Still, in practical implementations raytracing is better at this: implementing such a system with a raytracer is much easier, and it's much easier to make it cache coherent. The visibility query time does not depend much on the depth complexity of the scene along that direction. The problem is that this is achieved with data structures that are not suited to dynamic objects.

Raytracing lets you perform visibility queries from arbitrary directions. This allows great flexibility in lighting and shading algorithms. Rasterization is limited to coherent queries on the image plane. Not being able to choose arbitrary directions limits the kinds of shading you can do (i.e. no curved reflections) and limits the kinds of importance sampling you can do! Raytracing directly samples the rendering equation integral, and everything in it translates into a nice, uniform importance sampling problem. This is a huge advantage, because the complexity of the computation is not the sum of the complexity of each subsystem; sampling decisions can be made across them. For example, if you have thousands of lights and motion blur, you can distribute samples that choose different light influences and different times in a coupled way. And you can do importance sampling. Try rendering thousands of lights with shadow maps!
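A hedged sketch of what that coupled sampling buys you. Everything here (the light model, the helper names) is hypothetical; the point is that each sample jointly picks one light, importance-weighted by its power, and one shutter time, instead of evaluating every light at every time:

```cpp
#include <random>
#include <vector>

struct Light { float power; /* position, color, ... */ };

// Hypothetical stand-in: trace one shadow ray to the light at the given
// shutter time and return its (unweighted) contribution.
float shadowRayContribution(const Light&, float /*time*/) { return 0.f; }

// Pick a light with probability proportional to its power; return its index
// and the probability of the pick, needed by the Monte Carlo estimator.
int pickLight(const std::vector<Light>& lights, float u, float& pdf)
{
    float total = 0.f;
    for (const Light& l : lights) total += l.power;
    float target = u * total, accum = 0.f;
    for (size_t i = 0; i + 1 < lights.size(); ++i) {
        accum += lights[i].power;
        if (target <= accum) { pdf = lights[i].power / total; return int(i); }
    }
    pdf = lights.back().power / total;
    return int(lights.size()) - 1;
}

// One radiance sample: light choice and shutter time are drawn together,
// so a thousand lights cost one shadow ray per sample, not a thousand.
float estimateDirect(const std::vector<Light>& lights, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    float pdf;
    int i   = pickLight(lights, uni(rng), pdf); // importance-sample the light
    float t = uni(rng);                         // sample shutter time, pdf = 1
    return shadowRayContribution(lights[i], t) / pdf;
}
```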

Realtime raytracing needs coherency. The problem is that those arbitrary queries tend to be very cache unfriendly, and even though we do have some memory-coherent data structures, the more scattered your visibility queries are, the less efficient your raytracer will be. So we have more flexibility, but to be fast we currently have to restrict ourselves to rasterization-like queries! So there's no point in using those kinds of realtime raytracing if we eventually fall back to the situation that is most appealing to rasterizers. We don't yet have enough power to simulate the kinds of effects raytracing is very good at. We barely have it for offline rendering as of now; the way to realtime is still very long!
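What "restricting ourselves to rasterization-like queries" looks like in practice is packet tracing (my illustration, not something the post names): a bundle of rays walks the tree together so each node fetch is amortized across the packet. Reusing the hypothetical KDNode/AABB/Ray types from the first sketch:

```cpp
#include <cstring>

constexpr int kPacketSize = 16; // e.g. a 4x4 tile of primary rays

struct RayPacket {
    Ray  rays[kPacketSize];
    bool active[kPacketSize]; // which rays still care about this subtree
};

// Hypothetical stand-in: does the ray's segment cross the node's bounds?
bool rayHitsBox(const Ray&, const AABB&) { return true; }

void tracePacket(const KDNode& node, RayPacket& packet)
{
    // One node fetch from memory serves every ray still active here.
    bool anyActive = false;
    for (int i = 0; i < kPacketSize; ++i) {
        packet.active[i] = packet.active[i]
                        && rayHitsBox(packet.rays[i], node.bounds);
        anyActive |= packet.active[i];
    }
    if (!anyActive)
        return; // the whole packet missed: prune without touching children

    if (node.isLeaf()) {
        // ...intersect the active rays against the leaf's triangles...
        return;
    }
    bool saved[kPacketSize];                      // restore masks between
    std::memcpy(saved, packet.active, sizeof(saved)); // the two subtrees
    tracePacket(*node.nearChild, packet);
    std::memcpy(packet.active, saved, sizeof(saved));
    tracePacket(*node.farChild, packet);
}
```

Coherent primary rays mostly agree on which children to visit, so the amortization works; scattered secondary rays keep every branch alive and each ray effectively pays the full memory cost again, which is exactly the problem described above.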

Rasterization is very cache friendly because we traverse the geometry, after culling, in a linear fashion. We write pixels in a random order, but usually there's a lot of locality anyway. We only have to keep the framebuffer in memory (and not even all of it, with predicated tiling) and then we can stream data into it as we process our geometry. It's very nice and very efficient; the problem is that we are restricted to one given kind of visibility query, and there's no way of escaping that.
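A toy scalar sketch of that streaming pattern (hypothetical helpers, no clipping or tiling, Triangle as before): geometry is read once, linearly, and the only random-access writes land in one fixed-size buffer, which is why the hardware can prefetch and stream so well:

```cpp
#include <limits>
#include <vector>

struct Fragment { int x, y; float z, shade; };

// Hypothetical stand-in: scan-convert one triangle into covered pixels.
std::vector<Fragment> rasterizeTriangle(const Triangle&) { return {}; }

void rasterize(const std::vector<Triangle>& tris, int w, int h,
               std::vector<float>& depth, std::vector<float>& color)
{
    depth.assign(size_t(w) * h, std::numeric_limits<float>::max());
    color.assign(size_t(w) * h, 0.f);
    for (const Triangle& t : tris) {            // one linear pass over geometry
        for (const Fragment& f : rasterizeTriangle(t)) {
            size_t idx = size_t(f.y) * w + f.x; // "random" writes, but they
            if (f.z < depth[idx]) {             // cluster inside the triangle's
                depth[idx] = f.z;               // screen-space bounding box
                color[idx] = f.shade;
            }
        }
    }
}
```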

Scattering or gathering? I don't see a clear answer. Raytracing has a lot of potential, and it's a more powerful algorithm, directly linked to the solution of the rendering integral. But as of now even offline renderers do not harness its true power (even if things are moving in the right direction). Realtime raytracing is slow, it's limited to rasterization-like effects, it has no general solution for dynamic scenes, and it does not integrate well with shaders either (why? because if you have programmable shaders you can't shoot secondary rays without evaluating them, and ray shooting can be arbitrary and depend on other rays, so it's hard to generate the large number of rays that memory-coherent raytracing needs to work well).
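A sketch of that shader problem in its simplest form (hypothetical names throughout, Ray as before): each secondary ray exists only after the shader at the previous hit has run, so there's no way to build up front the big coherent ray batches that fast raytracing wants:

```cpp
// Hypothetical stand-ins for a hit record, a material, and the tracer.
struct Hit      { float position[3], normal[3]; };
struct Material { float roughness; /* textures, BRDF parameters, ... */ };

Material evalMaterialShader(const Hit&) { return {}; } // arbitrary user code
Ray  sampleBounceDirection(const Material&, const Hit&) { return {}; }
bool trace(const Ray&, Hit&) { return false; } // one lone, incoherent ray

void pathTrace(Ray ray, int maxDepth)
{
    for (int depth = 0; depth < maxDepth; ++depth) {
        Hit hit;
        if (!trace(ray, hit))
            break;
        // The next ray's direction depends on the shader's output, which can
        // depend on textures, parameters, even this ray: it cannot be known,
        // let alone batched with its neighbours, before the shader runs.
        Material m = evalMaterialShader(hit);
        ray = sampleBounceDirection(m, hit);
    }
}
```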

Probably some more investigation has to be done in a more unified framework: raytracing is a gathering operation, rasterization is a scattering operation. There are a few attempts at mixing the two; it's very hard (that's why REYES does not mix well with raytracing), and most hybrid schemes just use rasterization to shoot a large number of coherent rays on a plane (see this, which is a nice example of the more general idea of Keller's ray-bundle tracing), but there's nothing out there now that can be seen as a solution. We'll see.
