Since the infamous Carmack interview on PC Perspective, (some of) the realtime rendering world has been rediscovering voxels (as point based rendering is something we weren't doing yet anyway).
No one tells us why. Why should having less information (about topology) be better than having more? Well, if you have so much data that it can't fit in memory, I can easily see the advantage, but that doesn't seem to be our problem in most cases as of now.
And weren't we all excited about DX10 geometry shaders exactly because we could have access to that kind of data?
I simply hate the hype. I hope that soon someone (more influential than me) says in an interview how cool NURBS are, so we will have both opposite ends of the hype covered, fully parametric surfaces versus raw 3D points.
The other (and related) hype is about raytracing techniques. I consider most realtime raytracing research to be dangerous for raytracing itself. Why do we love raytracing? Because it allows us to answer random visibility queries. Why do we love being able to do that? Because it enables us to use more refined methods of integrating the rendering equation. Faster ones, more adaptive, if you want. And even those only became popular in the non-realtime world a few moments ago...
Realtime raytracing research is mostly focused in the opposite direction: restricting the queries to coherent ones, and so also restricting the effects we can simulate to the ones that rasterization already does so well.
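To make "random visibility query" concrete, here is a minimal sketch (all types and the toy scene are made up for illustration, this is not from any of the demos discussed): estimating the visibility of an area light with incoherent shadow rays, the kind of query that enables Monte Carlo integration of the rendering equation and that coherency-first realtime raytracers give up.

```cpp
// Minimal sketch: fractional visibility of an area light, estimated
// with N incoherent "random visibility queries" (shadow rays). This is
// the kind of query raytracing answers well and rasterization does not.
// The scene (one sphere blocker, one square light) is hypothetical.
#include <cstdio>
#include <cstdlib>
#include <cmath>

struct Vec { float x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Visibility query: is the segment from p to q blocked by a sphere?
static bool occluded(Vec p, Vec q, Vec center, float radius) {
    Vec d = sub(q, p), m = sub(p, center);
    float a = dot(d, d), b = 2*dot(m, d), c = dot(m, m) - radius*radius;
    float disc = b*b - 4*a*c;
    if (disc < 0) return false;
    float t = (-b - std::sqrt(disc)) / (2*a);
    return t > 0 && t < 1;                    // hit inside the segment
}

int main() {
    Vec shadePoint = {0, 0, 0};
    Vec blocker = {0, 1, 0}; float blockerR = 0.4f;
    const int N = 1024;
    int visible = 0;
    for (int i = 0; i < N; ++i) {
        // Random point on a square area light above the scene: each
        // sample is an independent, incoherent visibility query.
        Vec onLight = {(rand()/(float)RAND_MAX - 0.5f) * 2,
                       2.0f,
                       (rand()/(float)RAND_MAX - 0.5f) * 2};
        if (!occluded(shadePoint, onLight, blocker, blockerR))
            ++visible;
    }
    // Monte Carlo estimate of the (purely geometric) light visibility:
    std::printf("fractional visibility ~ %.3f\n", visible / (float)N);
    return 0;
}
```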
It seems that the only thing you gain is the ability to render more accurate specular reflections, very, very, very slowly. Very useful indeed; it's exactly the thing artists ask me to bring them all day...
P.S. That was unfair; in fact, just the ability to compute per-pixel shadows in a robust way, without having to mess with shadow map parametrizations etc., is a very nice feature. But it's not enough.
19 comments:
Here is some currently popular hype for the opposite of voxels, as a counter-balance ;-)
Voxels were very popular in the days of isometric RTSes and pre-3D Sim City.
Everything old is new again in games and graphics.
Oh no, but by "popular" I mean hyped; as I said, even point based rendering was well known and researched in the rendering community before Carmack's recent interview, yet it was not hyped...
I wouldn't be that harsh. John Carmack has done some amazing things in the past (not many can put something on screen that makes you think "no, this can't be done", and then do it again and again), so it's normal to have a lot of people who take everything he says very seriously. At Siggraph they will also present their ray casting on octrees stuff, so I guess we will soon have a lot of material to talk about :)
Ray tracing is also this brand new thing (no please, don't laugh!) and it doesn't matter if it makes sense or not; some really like to fill their mouths with those words (and most of them, let me guess, are not game rendering engineers...) and it's naturally going to stay with us, till the hype and the nonsense fade away and it becomes just another trick in the bag.
Marco
Carmack of course has all my respect. And some hype on a non-technical website is of course to be expected with each new buzzword.
I'm looking forward to the Siggraph stuff (I think you're referring to OToy, right?); I think it's very interesting tech.
But my point is, there are a lot of very interesting technologies, and most of the stuff that is hyped now is not new at all.
And since it's not new at all, I would like, even before seeing the actual implementations, for someone to explain to me why we like those ideas...
Which problems are they supposed to solve?
We'll see...
Dunno if Otoy is going to be presented at Siggraph, but Jon Olick (id software) will supposedly present some recent work based on raycasting on sparse voxels implemented with CUDA.
Unfortunately I won't be able to attend Siggraph this year :(
Marco
Oh so it's really going to be the year of the raytraced voxels...
We'll see how wrong I was then.
deadc0de, did you see the new Ruby demo from ATI, which is using voxel based rendering and does raytracing for reflections (cars, windows)? I think the quality of those graphics justifies the recent hype around voxels and raytracing until someone can show similar quality graphics with rasterization. I haven't seen anything like it done in realtime on consumer hardware. Not even the city in CryEngine2 (the realtime remake of the Bravia ad from GDC08; it's on the Crytek website), while very impressive, comes close imo.
Yes I saw it. I think there's nothing in it that could not be rendered with polygons. And without raytracing. The only thing that is really impressive is the dof blur.
What do you see in that demo that is not renderable with standard techniques?
Also, it seems that they use multiple GPUs to render that...
I agree that the demo could probably be done entirely with polygons and without raytracing. My point is that no one has done it yet. Also, there must be a good reason why id is investigating other rendering approaches.
I don't really understand your dislike for realtime raytracing. If games can benefit from hybrid rendering (as opposed to rasterization alone) and achieve some spiffy raytraced effects without performance suffering, I don't see the problem.
I love raytracing; in fact, I probably know more about it than about rasterization. I hate the hype, I think it's not helping: Intel showcasing crappy demos of Quake 3 with reflective spheres all around...
Of course it would be great to have the ability to trace rays in our shaders. It would be great. I don't think it's ever going to happen: hybrid systems are very complex to design, which is also why REYES (i.e. Photorealistic RenderMan) does not mix well with raytracing.
My post was only intended to make people think past the hype and the buzzwords. Think about the pros and cons of that technology. Think about what it's going to bring us, what we could do with it, and what's going to be more difficult instead.
As of now, no one has told me where the real advantages for realtime rendering are that raytracing is going to deliver, now or in the near future. I don't see them; I think they don't exist. Raytracing only started to be attractive in the non-realtime world recently, and I think that in the realtime one we are still far from the level of scene complexity that makes raytracing a better solution to the rendering equation than rasterization-based approximations.
Plus, there are some obvious contradictions between different hype trends... I pointed out DX10 geometry shaders vs voxels (or point clouds); I could also mention fully dynamic, physics-based worlds vs static kd-tree based rendering... Some hypes do not mix with others, so we have to make a choice, or at least be coherent...
Intel has indeed spread a lot of disinformation when it comes to raytracing.
Why do you think that hybrid renderers are complex to design? This sort of misconception is exactly what Intel and raytracing purists want people to believe. In reality, hybrid renderers already exist and they work fine (e.g. Rapidmind and Otoy).
I second that: kill the hype and use your head.
A few points:
1. John Carmack has taken the odd wrong turn in the past... well a lot less than the rest of us obviously. :)
2. Raytracing... is simple. Simple as in people can see how a simple algorithm solves problems that otherwise would require additional techniques: shadows, translucency and fancy reflections to mention a few. So I think people are lured by this false sense of simplicity and desire for all their problems to go away.
3. Raytracing vs Rasterisation is a big O problem which is (almost) always a win for rasterisation. It's surprising how many people are willing to ignore that...
straaljager: hybrid renderers are very hard to design because fundamentally raytracing and rasterization correspond to two different orders of traversal of the scene, so everything that is an optimization for the former (i.e. improves coherency etc.) is not one for the latter.
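A toy sketch of the two traversal orders (a made-up 1D "scanline" and Span type, nothing like a real renderer): the rasterizer-like loop streams geometry once, while the raytracer-like loop queries all geometry per pixel and in practice wants a prebuilt, mostly static acceleration structure.

```cpp
// Toy 1D illustration: "triangles" are depth-tagged spans on one
// scanline. Same scene, two incompatible traversal orders.
#include <cstdio>
#include <vector>
#include <algorithm>

struct Span { int x0, x1; float depth; };

// Object order (rasterization-like): O(spans * coverage), trivially
// streamable, no global structure needed.
void rasterize(const std::vector<Span>& spans, std::vector<float>& zbuf) {
    for (const Span& s : spans)                    // outer: geometry
        for (int x = s.x0; x < s.x1; ++x)          // inner: pixels
            zbuf[x] = std::min(zbuf[x], s.depth);  // depth test
}

// Image order (raytracing-like): per pixel, query the geometry. A real
// tracer uses a tree for ~O(log n) per ray; the point is the inverted
// loop, which needs the whole scene indexed before the first pixel.
void raytrace(const std::vector<Span>& spans, std::vector<float>& zbuf) {
    for (int x = 0; x < (int)zbuf.size(); ++x)     // outer: pixels
        for (const Span& s : spans)                // inner: geometry
            if (x >= s.x0 && x < s.x1)
                zbuf[x] = std::min(zbuf[x], s.depth);
}

int main() {
    std::vector<Span> spans = {{2, 6, 0.5f}, {4, 9, 0.2f}};
    std::vector<float> a(10, 1e9f), b(10, 1e9f);
    rasterize(spans, a);
    raytrace(spans, b);
    for (int x = 0; x < 10; ++x)                   // identical results,
        std::printf("%d: %g %g\n", x, a[x], b[x]); // different orders
}
```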
Designing a hybrid renderer imposes a lot of limitations.
Not only that: the major advantages of the two techniques do not mix well.
I.e., it's hard to integrate a shader system à la RenderMan (or HLSL/Cg etc.) with a raytracer. Vertex shaders would require bounds on the maximum displacement, or would have to be executed for all surfaces before tracing rays. Surface shaders limit the raytracer's ability to shoot rays with importance sampling, and are a serious problem for many other nice techniques. That's also why the G.I. implementation of PRMan works by gathering cached data, and not (mostly) by shooting rays inside the shader; that was a wise choice, but it's still limiting compared to other systems.
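A minimal sketch of the displacement-bound issue, with hypothetical types and numbers (the idea matches RenderMan's "displacementbound" attribute, where the bound is a promise the shader has to keep):

```cpp
// Sketch of why raytracing needs displacement bounds up front: the
// acceleration structure is built before any surface shader runs, so
// each patch's box must be padded by a declared maximum displacement.
#include <cstdio>

struct AABB { float min[3], max[3]; };

// Conservatively grow a bound by the shader's declared maximum
// displacement along any direction.
AABB padForDisplacement(AABB b, float maxDisp) {
    for (int i = 0; i < 3; ++i) {
        b.min[i] -= maxDisp;
        b.max[i] += maxDisp;
    }
    return b;
}

int main() {
    AABB patch = {{0, 0, 0}, {1, 1, 1}};
    // The shader promises displacement never exceeds 0.1 units; break
    // that promise and rays silently miss geometry.
    AABB safe = padForDisplacement(patch, 0.1f);
    std::printf("padded box: [%g..%g]\n", safe.min[0], safe.max[0]);
}
```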
straaljager: ah, and AFAIK "rapidmind" is just a language for CPU and GPU, not a hybrid renderer at all. And about OToy... I've found no evidence that it's a hybrid renderer; they always say "no polygons" and "raytracing". There are some screenshots that hint at the use of shadowmaps in an older demo, but that's all... Anyway, even if OToy or whatever is a hybrid renderer, I've never claimed that it's impossible to do (PRMan and Mental Ray are hybrid renderers, for example), just that it's difficult...
yours truly: mind, I never said that Carmack is stupid or that he said anything wrong. He DOES have a point. So much so that the thing he wants to achieve is not his invention at all. It's called point based rendering, and it IS a nice thing. But it's nothing to hype, because it's good (as of now) only in some settings that (as of now) do not matter too much for mainstream realtime rendering.
I think his idea was: polygons are becoming smaller and smaller, and if we end up rendering one-pixel ones most of the time (and this IS a serious problem, because GPUs are optimized for polygons that are at least 2x2 pixels big), then there's no advantage in that representation. And this is right, and it's also a nice idea to think about raytracing them, as current GPUs won't handle those kinds of small entities gracefully (but they could be used as GPGPUs, as OToy seems to be doing). All that is ok, but it's not new; it's a research field, active, with papers, with conferences etc...
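A back-of-the-envelope version of the small-polygon problem (the 2x2 shading quad granularity is standard GPU behavior; the loop below is purely illustrative):

```cpp
// Illustrative arithmetic only: GPUs shade fragments in 2x2 quads, so
// tiny triangles pay for lanes they never use. Assumes the worst case
// of one quad per triangle regardless of coverage.
#include <cstdio>

int main() {
    const int quadLanes = 4;                 // 2x2 shading quad
    for (int covered = 1; covered <= 4; ++covered) {
        float wasted = 100.0f * (quadLanes - covered) / quadLanes;
        std::printf("%d px covered -> %.0f%% of shading wasted\n",
                    covered, wasted);       // 1-px triangle: 75% wasted
    }
}
```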
But NO ONE says that! No one even seems to acknowledge the existence of such techniques independently of Carmack's interview!
Out of idle curiosity, have you revised your opinion on these matters at all? There seem to be advantages to both raytracing and rasterization, and voxels are nice for landscapes and ZBrush models while polygons handle animation and operations requiring more detailed topology. Is there a reasonable hybrid approach that you know of?
dja: no, I haven't yet seen anything that could make me change my mind.
People say that voxels are good for landscapes just because they haven't found a way to animate them... But where is the advantage over rasterization, for landscapes?
Detail? I don't think so: think tessellation + displacement. Same thing with ZBrush really; it's made for displacement, so voxels would only render more slowly, without animation, and with less quality (it's hard to compute the normals...).
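For reference, the usual way to get a normal out of a voxel grid is the central-difference gradient of a density field; here is a minimal sketch with a made-up smooth density(). On sparse or binary voxel data this gradient gets noisy, which is exactly the quality problem mentioned above.

```cpp
// Sketch: voxel normals as the normalized central-difference gradient
// of a density field. density() here is a hypothetical smooth field
// (a sphere); binary occupancy grids make this gradient noisy.
#include <cstdio>
#include <cmath>

static float density(float x, float y, float z) {
    return 1.0f - std::sqrt(x*x + y*y + z*z);  // smooth sphere, radius 1
}

static void normalAt(float x, float y, float z, float h, float n[3]) {
    // Central differences approximate the gradient; the normal points
    // along it (sign depends on the inside/outside convention).
    n[0] = density(x + h, y, z) - density(x - h, y, z);
    n[1] = density(x, y + h, z) - density(x, y - h, z);
    n[2] = density(x, y, z + h) - density(x, y, z - h);
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    for (int i = 0; i < 3; ++i) n[i] /= len;
}

int main() {
    float n[3];
    normalAt(1, 0, 0, 0.01f, n);             // point on the sphere
    std::printf("normal ~ (%.2f, %.2f, %.2f)\n", n[0], n[1], n[2]);
}
```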
Faster? They are not faster. More detail? As I said, no, and here's another proof: if so, why doesn't non-realtime 3D dance with voxels? Is there a single voxel-editing 3D package out there? What are we talking about? Hype, my friend, still hype.
I understood from Jon Olick's Siggraph 2008 talk that rasterization is limited in the short term by serialized triangle setup, so it cannot show the same level of detail in real time as his raycasting approach until new manufacturing processes (3D lithography, which seems like it might be a Good Thing) are used.
Another takeaway for me is that the raycast sparse voxel octree's performance is independent of the artistic content, relative to rasterization approaches. Id has already used megatextures... this adds geometry. I don't think his demo made use of the traditional shadowing or lighting features of raytracing, though. Perhaps raytracing is more of an Intel goal, to play to Larrabee's strengths.
The presentation also says that content creation would be simplified since there is no need for BSPs/portals, tweaking triangles for LOD, or UV maps. Ray approaches inherently sample the geometry and octrees are like a mipmap for geometry. Rasterization needs extra work for all this.
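A sketch of that "mipmap for geometry" idea, with made-up constants and a crude perspective model: traversal stops subdividing once a node projects to about a pixel, so level of detail falls out of the data structure rather than being authored.

```cpp
// Sketch of the "octree as geometry mipmap" idea: stop descending when
// a node projects to less than a pixel. The screen-scale constant is
// hypothetical; a real traverser refines per ray, not per object.
#include <cstdio>

int levelForDistance(float rootSize, float distance,
                     float pixelsPerUnitAtOne /* screen scale */) {
    int level = 0;
    float nodeSize = rootSize;
    // Projected size shrinks ~1/distance; halve the node until it
    // covers about one pixel on screen.
    while (nodeSize * pixelsPerUnitAtOne / distance > 1.0f) {
        nodeSize *= 0.5f;
        ++level;
    }
    return level;   // deeper level = finer voxels, only where needed
}

int main() {
    for (float d = 1; d <= 64; d *= 4)
        std::printf("distance %5.1f -> octree level %d\n",
                    d, levelForDistance(1024.0f, d, 1.0f));
}
```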
I think there are quite a few devils in the details (thin-walled materials, dynamically updating the voxel data structure), but it seems to have some merit.
dja: triangle setup cost, that's surely true. How to avoid that? We already have the answers. Reyes, or point rendering...
About content creation, that's a myth. Topology is important, both in rendering and modeling. That's why voxels are not used... anywhere!
Last, about complexity: raytracing is not faster than rasterization, nor does it scale better. Spatial subdivision structures do exhibit good scalability with respect to geometry, but they're usable in both worlds. We don't do that because we don't need it: games simply do not have that much z-complexity, or rather, we already have culling methods that are simpler and faster.