
03 February, 2009


Finally I've found some time to read a few publications from recent and not-so-recent conferences...

Cuda this, GPU that; it seems that most of the effort is spent finding ways of adapting old algorithms to the GPU, even in fields where the GPU computation model (at least, of this generation of GPUs, who knows about the future) does not apply very well.

Dunno, maybe since I left university and started to work in the gaming industry I've become too pragmatic... Still, there are things worth reading. Approximating Dynamic Global Illumination in Image Space was really to be expected: SSAO ported to diffuse global illumination. Point-based approximate color bleeding by Pixar is more exciting: a realtime technique that gets implemented (and mutated) into the most high-end, and thus most stable, offline rendering engine.

If you planned to impress your friends with some realtime fluids computed on the GPU,
Real-Time Fluid Simulation using Discrete Sine/Cosine Transforms is a realtime, frequency-space approach (like the über-famous Stable Fluids by Jos Stam), with support for boundary conditions.
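The core trick of these frequency-space solvers is easy to show: once you transform the field, diffusion (normally a neighborhood operation) becomes an independent per-mode decay, unconditionally stable. Here's a toy 1-D sketch of the idea; the code and names are mine, using a plain FFT on a periodic unit-spaced grid, while the paper's DST/DCT machinery is what buys you the boundary conditions:

```python
import numpy as np

def diffuse_freq(u, nu, dt):
    """Diffuse a periodic 1-D field in frequency space: each Fourier mode
    decays independently (implicit step, unconditionally stable)."""
    n = len(u)
    u_hat = np.fft.rfft(u)
    k = 2.0 * np.pi * np.fft.rfftfreq(n)  # angular wavenumber per mode
    u_hat /= 1.0 + nu * k * k * dt        # 1/(1 + nu*k^2*dt): implicit diffusion
    return np.fft.irfft(u_hat, n)

# a sharp spike smears out, while the total quantity is conserved
u0 = np.zeros(64)
u0[32] = 1.0
u1 = diffuse_freq(u0, nu=0.1, dt=1.0)
```

The projection step that makes the velocity field divergence-free works the same way, as it's again a per-mode operation in frequency space.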

If you use spherical harmonics, and your game has day/night cycles,
Efficient Spherical Harmonics Lighting with the Preetham Skylight Model might be nice, even if you'd probably have to update the skydome map anyway, and you should still have plenty of time to do that the slow way, amortized over a number of frames...
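Amortizing the update is trivial anyway; something like this generator-style sketch, where the names and the brute-force Monte Carlo sampling are mine, not the paper's (their whole point is an analytic SH fit of the Preetham model, which is even cheaper):

```python
import math, random

def sh_basis_l1(x, y, z):
    """Real spherical harmonics basis, bands 0-1 (4 coefficients)."""
    return (0.282095, -0.488603 * y, 0.488603 * z, -0.488603 * x)

def project_sky_incremental(sky_radiance, n_samples, per_frame):
    """Monte Carlo projection of sky_radiance(x, y, z) onto 4 SH coefficients,
    taking only `per_frame` samples per yield, so one full update costs many
    cheap frames instead of one expensive one."""
    sums = [0.0, 0.0, 0.0, 0.0]
    taken = 0
    while taken < n_samples:
        for _ in range(min(per_frame, n_samples - taken)):
            # uniform direction on the upper hemisphere (z in [0, 1])
            z = random.random()
            phi = 2.0 * math.pi * random.random()
            r = math.sqrt(max(0.0, 1.0 - z * z))
            x, y = r * math.cos(phi), r * math.sin(phi)
            radiance = sky_radiance(x, y, z)
            for i, basis in enumerate(sh_basis_l1(x, y, z)):
                sums[i] += radiance * basis
            taken += 1
        # running estimate: sample mean times the hemisphere solid angle
        yield [s / taken * 2.0 * math.pi for s in sums]

# sanity check with a constant sky: the DC coefficient is exact,
# and the x/y band-1 coefficients vanish by symmetry
random.seed(1)
updates = list(project_sky_incremental(lambda x, y, z: 1.0, 4096, per_frame=256))
```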

I've already encountered the work of
Ladislav Kavan (dual quaternion skinning), and even exchanged a couple of mails with him while I was researching quaternion skinning for my crowd renderer. Nice guy, and very interesting research. Animation is a field that I don't really know in depth, but it's surely showing some good progress, and it's still an area where a lot of improvement is possible, even right now, in practical applications. Physics-based animation is the future!
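For the record, the part of Kavan's technique that replaces the matrix blend is tiny. Here's a sketch of the blending step as I understand it, in my own Python, with dual quaternions as pairs of (w, x, y, z) tuples; building them from bone transforms and applying them to vertices is omitted:

```python
import math

def dlb(dual_quats, weights):
    """Dual quaternion Linear Blending (DLB): weighted sum of unit dual
    quaternions, with an antipodality fix, renormalized by the real part's
    magnitude. Each dual quaternion is a pair of (w, x, y, z) tuples."""
    ref = dual_quats[0][0]
    blend_r = [0.0] * 4
    blend_d = [0.0] * 4
    for (q_r, q_d), w in zip(dual_quats, weights):
        # q and -q encode the same rotation: flip into the hemisphere of
        # the first bone so opposite representations don't cancel out
        if sum(a * b for a, b in zip(q_r, ref)) < 0.0:
            w = -w
        for i in range(4):
            blend_r[i] += w * q_r[i]
            blend_d[i] += w * q_d[i]
    norm = math.sqrt(sum(c * c for c in blend_r))
    return (tuple(c / norm for c in blend_r),
            tuple(c / norm for c in blend_d))

# blending identity with a 180-degree rotation about z, half and half,
# yields a 90-degree rotation (a linear matrix blend would collapse instead)
identity = ((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.0))
rot_z_180 = ((0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, 0.0))
halfway = dlb([identity, rot_z_180], [0.5, 0.5])
```

That degenerate matrix blend is exactly the "candy wrapper" artifact this fixes.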

Ok, so, here is where I wanted to talk about how I was surprised to find this after I knew this, and how happy I was to see that people are actually investing in single-ray SIMD raytracing structures, instead of fast but useless ray-packet ones. Then I planned to crosslink two interesting posts from the Level of Detail blog, to show the work of Tobias Ritschel, and then to conclude with Progressive Photon Mapping, explaining a little bit the global illumination problem
(the title of the post was "offline" as in offline rendering), the magnificent work of Eric Veach versus the practical intuition of Jensen, how the latter looks to me more like a coder than a researcher, and the simplicity and beauty of a couple of his ideas (even if, in general, I don't like photon mapping).

As a postscript, I wanted to remark how bad papers are when they don't provide any insight into the downsides of the presented algorithm. In PPM there are a few that appear pretty obvious: the memory impact of keeping all that information on the hit points, which scales with the number of pixels in the image; dealing with aliasing; and how it compares to path tracing under less extreme conditions.

But my EEEPC keyboard is failing. It started with the CTRL key a long time ago, and I replaced it with the right Windows key... then a couple of function keys, and now the return key and the arrows trigger the delete button... So I have to stop. Anyway, it was a really boring post, I'll probably edit it or delete it later, on a decent computer...
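(Edited later, from a decent computer.) To give a taste of the simplicity I was praising: the whole progressive part of PPM, as I read the paper, boils down to a per-hit-point statistics update. A sketch, in my own notation, where alpha is the fraction of each pass's new photons that gets kept:

```python
import math

def ppm_update(radius, n_accum, m_new, alpha=0.7):
    """One pass of the progressive radiance estimate: keep a fraction alpha
    of the m_new photons found this pass, and shrink the gather radius so
    the density estimate tightens as passes accumulate."""
    n_next = n_accum + alpha * m_new
    shrink = n_next / (n_accum + m_new)
    return radius * math.sqrt(shrink), n_next

r1, n1 = ppm_update(1.0, 100.0, 50.0)
# ratio is (100 + 0.7*50) / (100 + 50) = 0.9, so the radius becomes sqrt(0.9)
```

Shrinking the radius while accumulating flux is what trades density-estimation bias for variance, pass after pass... but note that all those per-hit-point statistics are exactly the memory cost I was complaining about.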


asdf said...

Ray packets useless? Are you kidding?

DEADC0DE said...

I don't see any good in ray packets; they seem to me to be just an expression of the confused state that RTRT research is in nowadays.

Ok, that's totally unfair: researchers do a great job, they investigate great and not-so-great ideas, and it's obvious that most of them turn out to be impractical; that's the whole point of research. For practical stuff there's applied research, done by software companies...

So why is raytracing cool? We have to decide!

It can be cool because we can shoot arbitrary rays, integrate over paths, and do global illumination and a lot of cool effects easily... For sure it's not cool for rendering stuff that exhibits a lot of coherency, because there rasterization easily kills it.

There was this naive idea that raytracing was cool because each ray was independent, and so it provided the opportunity for plenty of parallelism. And I guess that for years rasterization gurus were laughing at that, while developing GPU chips that accomplished incredible results with exactly the opposite principle. Parallelism is cool when there's a lot of coherency, because memory is painfully slow.

Now CPUs are following the "GPU path" too, so it's evident that streaming architectures are the way forward. And so RTRT research shifted towards them, trying to add back a lot of coherency to their incoherent queries... But then you get into the field where rasterization wins easily, and so it does. Now with bizarre algorithms (so much for raytracing's simplicity?) and a cluster of CPUs/GPUs you can do something that rasterization did years ago, more slowly, but hey, look at those totally cool chrome spheres!

Again, I'm being unfair: of course ray packets are interesting, and of course raytracing research on GPUs is too, because maybe we'll find some sort of middle ground, maybe we'll find a good mapping of raytracing to the streaming paradigm.

But as of here and now, ray packets are hyped and useless; raytracing is still cool for offline rendering, even if I don't see much research in that anymore; and it's cool to hear about data structures geared towards SIMD and incoherent rays, that is to say, towards current processors and current raytracing applications. That's _useful_.

asdf said...

Wow, your reply could have been a full blog post :) Well, I'm in the process of becoming a RTRT guy myself, because of my graduation project. From practice, I know that there are applications in the RTRTracer I'm working on where ray packets increase performance incredibly. And applications where they don't. Anyway, I believe things will get really interesting with Larrabee, with hybrid rasterization/RTRT approaches. We'll see.

DEADC0DE said...

That's cool, the more RTRT researchers there are, the more I'll probably see something that will change my mind on the status of the research :)