It shouldn't be news that the quality of the publications at Siggraph has been getting progressively worse over the last few years, while other conferences are becoming more and more important: Eurographics, but also Siggraph Asia (see, for example, those really neat publications).
A lot of politics, sponsors, pressure from universities, younger reviewers... Nowadays Siggraph is more an occasion to meet people and see what's going on in the industry than a showcase of the best graphics research on the planet.
I didn't see anything groundbreaking, and a lot of publications were addressing problems that are not, in my view at least, so crucial. Still, I don't think this year's Siggraph was bad, and you'll find plenty of coverage of the event online, so I won't write about that.
I have the impression that non-realtime rendering, and GI in particular, has seen a slowdown recently, but it may also be that my interest has shifted away from those subjects, so I don't have a good picture.
To me what's more interesting, at least now, is realtime graphics, and I'm probably more sensitive to publications in that field. At the main conference, unsurprisingly, the most exciting realtime 3D presentation was Crytek's (see this), but I was really looking forward to the papers at HPG, one of Siggraph's side conferences.
Generally there is some pretty good stuff there, like the Morphological Antialiasing paper, and many others... But you have to filter out the buzz, and I was really bothered by some papers that, in my opinion, simply should not have been there. I don't really know why they bother me; probably it's because in the past I've seen published some ideas that I didn't bother to publish myself, thinking they would have been rejected anyway, or maybe it's just that I have too many friends in the research community with good ideas and little luck.
Hardware-accelerated Global Illumination by Image Space Photon Mapping. Wow! Let's read...
And what is it? Well, if you've followed any GPU GI research in the last couple of years, it's really easy. They use an RSM for the lights' first hit, they read it back on the CPU and use that data for normal photon tracing (claiming that the first hit is the slowest part, so that's all they care to do on the GPU), then they splat the photons using... photon splatting.
It's mostly a tech demo; it would be cool if they had published it as such, and with better assets it could be a worthy addition to NVidia's other demos. Maybe they could have published this applied research in NVidia's GPU Gems. But Siggraph?
Why "image space" anyway? And the worst part: why don't they say "RSM" or "splatting"? They cite those works as "related research", and that's it. They don't use any of those terms; they replaced everything with something that sounds better and newer... Photon splatting sounds slow, so let's use "image space", it's way cooler (never mind that nothing actually happens in image space there). RSMs are well known... let's call them... bounce maps (genius)!
Image Space Gathering. Even worse! And it came right after the previous one at the HPG conference! It's something really minor; the only application seems to be blurry reflections, and from the images it doesn't look so nice even for that.
The algorithm? Render your image, then blur it. But hey, preserve the edges using the Z-buffer, and make your kernel size proportional to depth too. Wow! Don't blow my mind with such advanced shit, man!
They say "image space" and "gathering", and in the abstract they also use "cross bilateral filtering". The idea is simple and little more than a not-so-neat trick, a curiosity with limited applications. But there's the buzz!
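To show just how little is going on, here's a toy sketch of that kind of depth-aware blur, written with NumPy on the CPU (the paper runs on the GPU, of course). All the parameter names and values here are my own hypothetical choices, not the paper's: the kernel footprint grows with depth, and a cross-bilateral weight built from depth differences keeps samples from bleeding across Z-buffer edges.

```python
import numpy as np

def depth_aware_blur(color, depth, radius=4, depth_sigma=0.1):
    """Toy edge-preserving blur: kernel size scales with depth, and
    neighbors are weighted by depth similarity so that samples do not
    bleed across Z-buffer edges. Parameters are hypothetical."""
    h, w = depth.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            # kernel radius grows with distance from the camera
            r = max(1, int(radius * depth[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # cross-bilateral weight: reject samples across depth edges
            wgt = np.exp(-((depth[y0:y1, x0:x1] - depth[y, x]) ** 2)
                         / (2.0 * depth_sigma ** 2))
            out[y, x] = ((color[y0:y1, x0:x1] * wgt[..., None]).sum((0, 1))
                         / wgt.sum())
    return out
```

That's the whole trick: on a flat-depth image it degenerates to a plain box blur, and across a depth discontinuity the weights go to zero, so the edge survives.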
I think it would be easy to write a buzz-meter: check the frequency of certain keywords in the abstracts and build a filtering system that intelligently filters out all the noise...
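Half-jokingly, a minimal sketch of that buzz-meter might look like this. The buzzword list and the threshold are entirely my own invented examples; tune to taste.

```python
import re

# Hypothetical buzzword list -- pick your own pet peeves.
BUZZWORDS = ("image space", "screen space", "real-time", "gpu",
             "interactive", "novel", "efficient")

def buzz_score(abstract: str) -> int:
    """Count buzzword occurrences in an abstract, case-insensitively.
    (Naive substring matching; 'gpu' inside a longer word would count.)"""
    text = abstract.lower()
    return sum(len(re.findall(re.escape(word), text)) for word in BUZZWORDS)

def filter_papers(papers, threshold=3):
    """Keep only (title, abstract) pairs under the buzz threshold."""
    return [(title, abstract) for title, abstract in papers
            if buzz_score(abstract) < threshold]
```

Feed it a list of (title, abstract) pairs and it quietly drops anything that leans too hard on the vocabulary instead of the idea.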