For now, I'll just link to a post from Tom Forsyth on the topic. His summary is good, but I wanted to point out a couple of things:
- He says that rasterization has a simple way of discarding work in its inner loop, i.e. pixels. That's only partially true. It's simple as long as you don't deal with depth, i.e. you're only discarding to avoid overdraw within a primitive. Once you have to account for depth across primitives, you need a z-buffer, then a z-buffer pyramid, then fragment tests against that pyramid; the rasterizer itself becomes more complex, and so on. It's really not easy, and today's GPUs are heavily optimized for exactly this, because overdraw had become a major problem. If you count all the machinery needed to properly cull triangles in a rasterization renderer, a KD-tree raytracer doesn't look any harder. In my opinion, raytracers usually tend to be easier to write than rasterizers (see the traversal sketch after this list).
- REYES is used by only a couple of offline renderers, most notably Photorealistic RenderMan. PRMan is hugely successful, but it's a single product, and I doubt its success owes much to REYES specifically. I'd guess it's so widely used in high-end productions first of all because of its proven stability, robustness and scalability. Also, both PRMan and Mental Ray are hybrids: they use some form of rasterization only for the "first hit" (the first visible surfaces) and then raytrace the hits after that. Rasterizing the first hit is nice not only for speed, but also for computing antialiasing, shading gradients and so on (there's a first-hit sketch below).
- When complex stuff is thrown in (e.g. reflections, shadows), rasterizer complexity tends to grow faster than raytracer complexity. That alone could be a good reason to experiment with realtime raytracing on next-gen GPUs. Raytracing is also well suited to parallel execution, especially the modern memory-coherent variants (see the packet-tracing sketch at the end). Dunno if it is THE future, but I wouldn't discard it so easily.
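To back up the claim in the first point, here's roughly what the "hard part" of a KD-tree raytracer looks like: a single recursive traversal routine. This is a minimal, hypothetical sketch; the node layout, the names and the leaf-intersection stub are mine, not taken from any particular renderer:

```cpp
#include <cmath>
#include <vector>

struct Ray { float org[3], dir[3]; };

struct KdNode {
    bool  leaf = false;
    int   axis = 0;              // split axis (inner nodes)
    float split = 0.f;           // split position along that axis
    const KdNode* child[2] = {}; // child[0] below the split, child[1] above
    std::vector<int> tris;       // triangle indices (leaves)
};

// Stub: a real renderer tests the ray against the leaf's triangles here.
bool intersectLeaf(const KdNode&, const Ray&, float /*tMax*/) { return false; }

// Classic recursive KD-tree traversal: visit the near child first and
// bail out as soon as a hit is found in the near cell.
bool traverse(const KdNode* n, const Ray& r, float tMin, float tMax)
{
    if (n->leaf)
        return intersectLeaf(*n, r, tMax);

    const float o = r.org[n->axis];
    const float d = r.dir[n->axis];

    if (std::fabs(d) < 1e-12f)   // parallel to the split plane: one side only
        return traverse(n->child[o > n->split], r, tMin, tMax);

    const float t = (n->split - o) / d;   // where the ray crosses the plane
    const bool belowFirst = (o < n->split) || (o == n->split && d <= 0.f);
    const KdNode* nearC = n->child[belowFirst ? 0 : 1];
    const KdNode* farC  = n->child[belowFirst ? 1 : 0];

    if (t > tMax || t <= 0.f)    // crossing is behind us or past the exit
        return traverse(nearC, r, tMin, tMax);
    if (t < tMin)                // crossing happens before we enter the node
        return traverse(farC, r, tMin, tMax);

    // The ray spans both children: near first, far only if the near misses.
    return traverse(nearC, r, tMin, t) || traverse(farC, r, t, tMax);
}
```

That's more or less the whole visibility kernel; compare that to a z-buffer pyramid with hierarchical fragment rejection bolted onto a rasterizer.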
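To make the hybrid "first hit" idea from the second point concrete, here's a hedged sketch with hypothetical names throughout; this is the general shape of the scheme, not PRMan's or Mental Ray's actual architecture. Rasterization has already resolved the first visible surface per pixel, so shading starts from that surface and only the secondary hits spawn rays:

```cpp
struct Vec3 { float x = 0, y = 0, z = 0; };

struct GBufferSample {     // what the rasterization pass left behind, per pixel
    Vec3 position;         // first visible surface along the view ray
    Vec3 normal;
    int  materialId = 0;
};

// Stubs: a real renderer traces these rays against the scene.
bool traceShadowRay(const Vec3&, const Vec3&)  { return false; }
Vec3 traceReflection(const Vec3&, const Vec3&) { return {}; }

// Shade one pixel. Note that no primary ray is ever cast: visibility,
// coverage and screen-space gradients all came from the rasterizer.
Vec3 shadePixel(const GBufferSample& g, const Vec3& lightPos)
{
    const Vec3 toLight = { lightPos.x - g.position.x,
                           lightPos.y - g.position.y,
                           lightPos.z - g.position.z };
    const bool shadowed = traceShadowRay(g.position, toLight); // 2nd hit: a ray
    // ...evaluate the surface shader using `shadowed`, and call
    // traceReflection() only where the material actually asks for it...
    return shadowed ? Vec3{} : Vec3{1, 1, 1};
}
```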
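And on the parallelism point, this last sketch shows the idea behind memory-coherent ("packet") tracing; the names are illustrative, not from any real tracer. Rays are grouped into 2x2 packets so that one acceleration-structure descent can serve four coherent rays at once, and tiles share no mutable state, so the frame parallelizes trivially:

```cpp
#include <vector>

struct Ray { float org[3], dir[3]; };
struct Hit { float t = 1e30f; int triId = -1; };   // default-constructed = miss

// Stubs the host renderer would provide.
Ray cameraRay(int x, int y) { return { {0, 0, 0}, {float(x), float(y), 1.f} }; }
Hit intersectScene(const Ray&) { return {}; }      // KD-tree/BVH traversal

// Trace one tile of the image. The inner 2x2 block is the "packet": in a
// SIMD tracer those four rays descend the tree together, so every node
// fetched from memory is amortized over the whole packet.
void renderTile(int x0, int y0, int x1, int y1, int width, std::vector<Hit>& out)
{
    for (int y = y0; y < y1; y += 2)
        for (int x = x0; x < x1; x += 2)
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx)
                    out[(y + dy) * width + (x + dx)] =
                        intersectScene(cameraRay(x + dx, y + dy));
}
// Tiles touch disjoint pixels and only read the scene, so each one can go
// to a different thread or core with no synchronization at all.
```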