28 March, 2008

Bad code is a virus

You did not document your system. You were in a rush and hacked the implementation. It happens even in the best families. Then someone else comes along and has to use that system. He doesn't find any documentation, so he searches the code. He finds how you used it, copies and pastes. Code is documentation. Bad code spreads.
Always comment bad code (hacks etc.). Always. Remove it at the end of the project. If you're the user, always ask when you can't find the documentation. Never copy and paste code you don't understand or aren't sure of; do not trust code.
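For example (a made-up snippet; textureCache, the material and the bug are all invented), even a hack can document itself:

    // HACK: the streaming system sometimes hands us a stale pointer on
    // level reload; I haven't tracked the real bug down yet. Re-fetching
    // by name works around it. REMOVE at the end of the project and fix
    // the streaming system instead of papering over it here.
    Texture* tex = textureCache.findByName(material.textureName);
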
We can win: with your help, bad code can be eradicated.

27 March, 2008

C++ is too hard (for me)

I'm going, for the second time, through "C++ Coding Standards", the classic C++ book by Sutter and Alexandrescu: "101 Rules, Guidelines, and Best Practices". And not one of them is unreasonable.

C++ is a huge language. It does a terrific job of giving the programmer all the control he needs to predict exactly how the machine will execute his code, while still allowing some very high-level constructs to be crafted, extending the language. All of that while staying compatible with C, a.k.a. "the cross-platform assembly language".

The problem is that to achieve all three of those goals together, C++ carries an enormous complexity. Books upon books have been devoted solely to telling you what NOT to do with C++. Complex static code analyzers have been written to enforce those rules. Where I'm currently working we have more than fifty worldwide coding rules, which can be extended by studio-wide and project-wide ones, enforced via code reviews and custom-configured static checkers (PC-lint and CodeWizard), and still I can't see them being applied consistently. Nor have I ever seen any source in the game/rendering realm strictly follow all the well-known C++ best practices. It's crazy!

Even when you just declare a class, there are many decisions to make before actually writing any code. Should the destructor be virtual or not? Protected or public? Do you need to hide copy and assignment? Remember to mark your single-parameter constructors as explicit. Why? Why do I have to mark my single-parameter constructors as explicit? Why? Why? If I want to declare a conversion operator, I'll declare it; why do I have to use "explicit" to force a behaviour that should be the default one? I'll forget to do it! C++ is too hard for me, there's too much noise...
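Just to make the noise concrete, here's a minimal sketch of the boilerplate those decisions produce for a plain polymorphic base class (the class name is made up):

    class Resource // hypothetical example class
    {
    public:
        // single-parameter constructor: explicit, or it silently becomes
        // an implicit conversion from int
        explicit Resource(int id);

        // meant to be used polymorphically, so the destructor must be
        // public and virtual (protected non-virtual otherwise)
        virtual ~Resource();

    private:
        // copying makes no sense for this type: declare the copy
        // constructor and assignment operator private and never define
        // them, so any accidental use fails at compile (or link) time
        Resource(const Resource&);
        Resource& operator=(const Resource&);
    };
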

Bjarne Stroustrup said in an interview: "...Many minor and not-so-minor problems we have discovered over the years cannot be addressed in C++ for compatibility reasons. For example, the declarator syntax is an unnecessary complication—just about any linear notation would be better. Similarly, many defaults are wrong: constructors should not be conversions by default, names should not by default be accessible from other source files, and so on. Not controlling the linker has been a constant source of problems; in particular, implementers seem to delight in providing similar features in incompatible forms..."

On some days, those not-so-minor problems really drive me mad.

P.S. It could be a very nice idea to provide your team with some templates for the definition of common kinds of types, e.g. value types, pure interfaces, static classes, templates, base classes, etc...

P.P.S. Why is C++ considered immutable? Why couldn't we deprecate some old/bad language features? Java does this. You could emit a "deprecated" warning for such features, so people who are not dealing with huge legacy code bases could fix those warnings and turn them into errors, instead of having to rely on external static code checking tools to generate the very same results (because in the end, most of the stuff we enforce with static code checkers amounts to really bad uses of the language that should be forbidden by the language itself).
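Compilers already have exactly that mechanism for library symbols, just not for language features. A minimal sketch (the function is made up):

    // MSVC and GCC can both flag deprecated symbols at every call site
    #if defined(_MSC_VER)
        #define DEPRECATED __declspec(deprecated)
    #else
        #define DEPRECATED __attribute__((deprecated))
    #endif

    DEPRECATED void oldBadApi(); // any call now emits a warning, which a
                                 // team can then promote to an error
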

21 March, 2008

No direction home.

I've noticed a couple of things about coding. One is about drunken coding, but it's not the topic of this post. The other one is about finding solutions.
I'm always surprised that most of the time, when investigating bugs or trying to optimize a piece of code (in other words, when trying to find a new solution for a given code snippet), I spend a lot of time at work with tools, iterating between the code, the debugger, PIX, and the like. I probe the code, dissect it, and gather a huge amount of info.
But many times I don't find the solutions to those problems at work, but on my way back home. The moment I leave the company I start thinking, and after a few minutes I find something to experiment with the following day.
Are those tools, the fast iteration, the wealth of ways we have to operate on the code, reducing our thinking time? Many times it seems easier to tweak the code and see what happens than to ponder the problem.
Does that happen only to me? Probably that's why I love minimal IDEs and languages for my home projects.

18 March, 2008

Four dimensions and more.

Are you doing your maths right? Tired of three dimensions? Is Linear Algebra way too easy? Well, Geometric Algebra is spicier. Jaap Suter's tutorial is a great read. Geometric Algebra is a unification of, mhm, almost everything: vectors, lines, planes, quaternions, complex numbers, the awful Plücker stuff, in any dimension.
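To see the unification at work in the smallest possible case, here's a toy sketch (my own, not from the tutorial) of the full geometric product in 2D, derived only from the rules e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12. Note how the even part, scalar plus bivector, behaves exactly like the complex numbers:

    #include <cstdio>

    // a general multivector of G(2,0): scalar, two vector components, bivector
    struct Multivector2
    {
        float s;   // scalar part
        float e1;  // vector part, x
        float e2;  // vector part, y
        float e12; // bivector part (oriented area)
    };

    Multivector2 mul(const Multivector2& a, const Multivector2& b)
    {
        // expand (a.s + a.e1*e1 + ...)(b.s + b.e1*e1 + ...) using the
        // rules above and collect terms per basis element
        Multivector2 r;
        r.s   = a.s*b.s   + a.e1*b.e1  + a.e2*b.e2  - a.e12*b.e12;
        r.e1  = a.s*b.e1  + a.e1*b.s   - a.e2*b.e12 + a.e12*b.e2;
        r.e2  = a.s*b.e2  + a.e1*b.e12 + a.e2*b.s   - a.e12*b.e1;
        r.e12 = a.s*b.e12 + a.e1*b.e2  - a.e2*b.e1  + a.e12*b.s;
        return r;
    }

    int main()
    {
        Multivector2 i = { 0, 0, 0, 1 };  // the unit bivector e12
        Multivector2 ii = mul(i, i);
        printf("e12 * e12 = %g\n", ii.s); // prints -1: e12 acts as the imaginary unit
        return 0;
    }
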

It's nice to be able to reason in arbitrary dimensions. Now what we need is a killer application. The latest Game Programming Gems (6) has an interesting article about using projective algebra to enhance the robustness of geometrical algorithms. But there aren't many other examples of uses for that extra coordinate we all get from memory alignment anyway. It's a shame, cool things could be done...

17 March, 2008

More on Raytracing

UPDATE: after a lot of discussion, I've moved part of my previous post into this new one, and rewrote it to be clearer.

Recently Carmack talked about raytracing as well, but I haven't found that discussion to be very interesting. Everyone talks about raytracing, but no one has really persuaded me with their way of looking at it. So here's my point of view:

Raytracing is more decoupled from geometry. But it is false that it has a lower algorithmic complexity. It's easy to see how raytracing vs rasterization ends up being only a matter of navigating the screen-object database in a different order. And this still holds true if we use spatial hierarchies. With a KD-tree, for example, raytracing traces a ray through the tree for every pixel, so it's numPixels*log(numObjects). Rasterization can, for each bounding box in the KD-tree, draw it and see whether it's visible against the Z-buffer; if it is, it can recurse. If the KD-tree fills the whole screen then it will draw numPixels pixels at each KD-tree level, so it's still numPixels*log(numObjects). That's all.
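In pseudo-C++ sketch form (KdNode, Ray, Hit, ZBuffer and the helpers are assumed types, not real APIs, just to show the symmetry of the two traversals):

    // Raytracing: one ray per pixel, walking down the tree
    // -> numPixels * log(numObjects)
    void traceRay(const KdNode* node, const Ray& ray, Hit& hit)
    {
        if (!ray.intersects(node->bounds)) return;
        if (node->isLeaf()) { intersectTriangles(node->triangles, ray, hit); return; }
        traceRay(node->nearChild(ray), ray, hit); // nearer child first
        if (!hit.found)                           // simplified; a real traversal clips t ranges
            traceRay(node->farChild(ray), ray, hit);
    }

    // Rasterization with hierarchical culling: test a node's bounding box
    // against the z-buffer, recurse only if some pixel of it would be visible.
    // A tree that fills the screen touches numPixels pixels per level
    // -> again numPixels * log(numObjects)
    void drawNode(const KdNode* node, ZBuffer& zbuf)
    {
        if (!zbuf.boxVisible(node->bounds)) return; // conservative z-test of the box
        if (node->isLeaf()) { rasterizeTriangles(node->triangles, zbuf); return; }
        drawNode(node->frontChild(), zbuf); // front-to-back for better culling
        drawNode(node->backChild(), zbuf);
    }
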

Still, in practical implementations raytracing handles this better: implementing such a system with a raytracer is far easier, and it's far easier to make it cache-coherent. The visibility query time does not depend much on the depth complexity of the scene along the query direction. The problem is that this is achieved with data structures that are not suited to dynamic objects.

Raytracing permits visibility queries from arbitrary directions. This allows great flexibility in lighting and shading algorithms. Rasterization is limited to coherent queries on the image plane. Not being able to choose arbitrary directions limits the kind of shading you can do (e.g. no curved reflections) and limits the kind of importance sampling you can do! Raytracing directly samples the rendering equation integral, and everything in it translates into a nice, uniform importance sampling problem. This is a huge advantage, because the complexity of the computation is not the sum of the complexities of each subsystem; sampling decisions can be made across them. For example, if you have thousands of lights and motion blur, you can distribute samples that choose light influence and time in a coupled way. And you can importance sample. Try rendering thousands of lights with shadowmaps!
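A toy sketch of what "coupled" means here (my own code, not any real renderer's; shootShadowRay is a stub standing in for the arbitrary visibility query): each sample jointly picks a time and a light, the light chosen with probability proportional to its power, and each contribution is divided by that probability, as in any Monte Carlo estimator.

    #include <cstdlib>
    #include <vector>

    struct Light { float power; };

    float frand() { return rand() / (float)RAND_MAX; }

    // stub: is light i visible from the shading point at this instant?
    float shootShadowRay(int lightIndex, float time) { return 1.0f; }

    // pick a light with probability proportional to its power (a CDF walk)
    int pickLight(const std::vector<Light>& lights, float totalPower)
    {
        float u = frand() * totalPower;
        for (size_t i = 0; i < lights.size(); ++i) {
            u -= lights[i].power;
            if (u <= 0.0f) return (int)i;
        }
        return (int)lights.size() - 1;
    }

    float shadePoint(const std::vector<Light>& lights, float totalPower, int numSamples)
    {
        float estimate = 0.0f;
        for (int s = 0; s < numSamples; ++s) {
            float time = frand();                       // motion blur: one time per sample
            int   i    = pickLight(lights, totalPower); // one light per sample
            float pdf  = lights[i].power / totalPower;
            estimate  += shootShadowRay(i, time) * lights[i].power / pdf;
        }
        return estimate / numSamples; // converges without ever looping over all lights
    }
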

Realtime raytracing needs coherency. The problem is that those arbitrary queries tend to be very cache-unfriendly, and even though we do have some memory-coherent data structures, the more scattered your visibility queries are, the less efficient your raytracer will be. So we have more flexibility, but to be fast we currently have to restrict ourselves to rasterization-like queries! So there's no point in that kind of realtime raytracing, if we eventually fall back to the very situation that suits rasterizers best. We don't yet have enough power to simulate the kinds of effects raytracing is very good at. We barely have it for offline rendering as of now; the road to realtime is still very long!

Rasterization is very cache-friendly because we traverse the geometry, after culling, in a linear fashion; we write pixels in a random order, but usually there's a lot of locality anyway. We only have to keep the framebuffer in memory (and not even all of it, with predicated tiling) and then we can stream data into it as we process our geometry. It's very nice, it's very efficient. The problem is, we are restricted to one given kind of visibility query, and there's no way to escape that.

Scattering or Gathering? I don't see a clear answer. Raytracing has a lot of potential, and it's a more powerful algorithm, directly linked to the solution of the rendering integral. But as of now, even offline renderers do not harness its true power (even if things are moving in the right direction). Realtime raytracing is slow, it's limited to rasterization-like effects, it has no general solution for dynamic scenes, and it does not integrate well with shaders either (why? because if you have programmable shaders you can't shoot secondary rays without evaluating them, and ray shooting can be arbitrary and depend on other rays, so it's hard to generate the large number of rays needed for memory-coherent raytracing to work well).

Probably more investigation has to be done in a more unified framework: raytracing is a gathering operation, rasterization is a scattering operation. There are a few attempts at mixing the two. It's very hard (that's why REYES does not mix well with raytracing), and most hybrid schemes just use rasterization to shoot a large number of coherent rays on a plane (see this, a nice example of the more general idea of Keller's ray bundle tracing), but there's nothing out there now that can be seen as a solution. We'll see.

16 March, 2008

Craft your programming language

Have you found the perfect mix of Standard ML, Lisp, HLSL, APL and Erlang, but writing a compiler seems too hard (and you're right, it's not an easy task)? Fear no more.

LLVM is a mature system for crafting compilers and virtual machines. Check out this nice tutorial.

If you're not comfortable with the idea of hand-crafting the parser as well, you can just use one of the myriad parser generators out there (I would recommend ANTLR).

14 March, 2008

Realtime radiosity?

http://www.geomerics.com/products.htm

Nice. But how?

A really smart coworker of mine noticed that some pillars in the demo were being lit as a whole, as if with a per-object coefficient, as the light changed. Plus, reading their technology page and how the system integrates with direct lighting... Mhm.

They seem to precompute visibility at given points around the scene (linked to objects? to vertices? to uniquely mapped textures? probably the latter). Then they take all the direct illumination plus shadows from a big lightmap of the entire level (how do they get an unwrap of the whole scene? is it computed dynamically based on the objects you see, like packing the lightmaps of the various visible objects into a big one? dunno, but it seems that for this to work you have to bake your direct illumination into lightmaps, even if it's dynamic). So probably they're computing the amount of lighting at each sample point by gathering visible light texels from the lightmaps (so are they expressing visibility directly in lightmap UV space? are they limited to a single bounce? probably).

I'm just guessing. I could be completely wrong. But still, I'm curious. If they're doing something like that, do they have a LOD on the number of sample points? How much memory does each point require to store the precomputed visibility? Are they able to gather more than a single bounce? As they talk about precomputed visibility, surely they're not going to handle dynamic worlds... Still, it's kinda nice.

A shame that you have to bake your dynamic lighting into lightmaps; that sounds expensive, especially because you can't limit it to visible surfaces (as the ATI skin subsurface scattering demo does, for example, through a neat use of early-z rejection) but have to do it for every object/surface that could influence the scene lighting...

Development stories

Me: "help, mouse does not work on my ps3 devkit"
Other: "do you have a keyboard plugged in?"
Me: "yes"
Other: "tried with another mouse"
Me: "yessir"
Other: "is your kit flashed with the latest update?"
Me: "all version correct"
Other: "do you have the second keyboard plugged in?"
Me: "!?!?"

P.S. Even with the second keyboard it did not work; I had to reboot while randomly swapping USB ports. Now it works, but the GPU debugger seems to crash. It's not funny. In the end I gave up and debugged the shader blindly, just iterating edit-compile-run cycles a few times.

10 March, 2008

Raytracing vs Rasterization

It's a really hot topic nowadays. Intel has been pushing realtime raytracing a lot lately, and this has spawned a number of discussions, both in the realtime raytracing community and in the rasterization rendering one.

For now, I'll just link a post from Tom Forsyth on the topic. His summary is nice, but I wanted to point out a couple of things:

  • He says that rasterization has a simple way to discard stuff in its inner loop, i.e. pixels. That's only partially true. It's simple when you don't deal with depth, i.e. when discarding stuff to avoid overdraw. If you have to account for depth, then you need a z-buffer, a z-buffer pyramid, checking of fragments against the pyramid; the rasterizer itself becomes more complex, etc. It's really not easy, and GPUs are heavily optimized nowadays to do all that, as overdraw was becoming a major problem. If you account for all the techniques needed to properly cull triangles in a rasterization renderer, a KD-tree raytracer does not seem any harder. In my opinion, raytracers usually tend to be easier to write than rasterizers.
  • REYES is used by only a couple of offline renderers, most notably PhotoRealistic RenderMan. PRMan is hugely successful, but it's only one product, and I doubt its success has much to do with REYES; I'd guess it's so widely used in high-end productions due, in the first place, to its proven stability, robustness and scalability. And both PRMan and Mental Ray are hybrid: they use some form of rasterization only for the "first hit" (the first visible surfaces) and then raytrace the other hits. Rasterizing the first hit is nice not only for the speed, but also for computing antialiasing, gradients, etc...
  • When complex stuff is thrown in (e.g. reflections, shadows), rasterizer complexity tends to grow faster than raytracer complexity. That alone could be a nice reason to experiment with RTRT on nextgen GPUs. Raytracing is quite suited to parallel execution, especially the modern memory-coherent variants. Dunno if it is THE future, but I won't discard it so easily.
UPDATE: I've moved the second part of this post into a newer one.

03 March, 2008

How to properly LOD pixel shaders?

I said in a previous post that pixel shaders have automatic LOD, in the sense that far away/small objects fill only a small part of the screen, so there is an implicit LOD performed by the perspective projection. And this is true.
The problem is that small, far away objects, due to that same perspective projection, tend to be the vast majority of the objects we draw. If we have many different shading techniques, this also means they cause the vast majority of state changes, and thus pipeline stalls. And that's bad.
How to solve it? Replace the shading of far away objects with a "lowest common denominator" technique, so all those objects can be grouped together and issue no state/shader changes.
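In sketch form (made-up types and thresholds, not any particular engine), the draw-list build could look like this: anything small enough on screen is forced onto one shared shader, then the list is sorted so those objects batch together.

    #include <algorithm>
    #include <vector>

    struct Mesh     { int shaderId; float boundingRadius; };
    struct DrawCall { int shaderId; const Mesh* mesh; };

    const int   kCommonFarShader  = 0;     // the shared "far away" shader
    const float kScreenSizeCutoff = 0.02f; // tweakable, fraction of screen height

    // rough projected size: radius over distance (ignoring the FOV scale)
    float projectedSize(const Mesh& m, float distance)
    {
        return m.boundingRadius / distance;
    }

    bool byShader(const DrawCall& a, const DrawCall& b)
    {
        return a.shaderId < b.shaderId;
    }

    void buildDrawList(const std::vector<Mesh>& meshes,
                       const std::vector<float>& distances,
                       std::vector<DrawCall>& out)
    {
        for (size_t i = 0; i < meshes.size(); ++i)
        {
            DrawCall dc;
            dc.mesh = &meshes[i];
            // small on screen? swap the fancy shader for the shared one
            dc.shaderId = projectedSize(meshes[i], distances[i]) < kScreenSizeCutoff
                          ? kCommonFarShader : meshes[i].shaderId;
            out.push_back(dc);
        }
        // sort by shader: all the far objects now render back to back,
        // with a single state change between shader groups
        std::sort(out.begin(), out.end(), byShader);
    }
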

01 March, 2008

Next-gen, realism and detail.

Is next-gen about realism? In most cases, yes it is. Is realism about detail? In my opinion, no. Or at least, not about fine-scale detail. It's strange, but I see that in many games, to be "nextgen", a lot of effort has been put into such details, while completely overlooking other, more important things. Nailing the exact shape of grass blades is completely irrelevant if you don't nail the exact colors of the grass.

Do this test: take a good photo and blur it. It's still good. You've lost detail, but the general appearance is the same (*). Do the same with games, and see how many pass the test. Assassin's Creed? Gears of War? What about Gran Turismo HD? Crysis? NBA Street Homecourt? Half-Life 2? Some titles have a quality that transcends fine detail; some others are so detailed that they are almost noise, and when you filter them you'll see how flat they really are.

Again, painters' technique is a great inspiration. Nail the general volumes, colors and appearance first. Then add details. Gran Turismo HD/5 has a lot of errors in the details: missing transitions, obviously tiled textures, you can even see trees that are still made of two textured, alpha-keyed planes. But it is incredibly realistic. And the previous titles, on the PlayStation 2, had the same extraordinary level of quality.

Color, shadows, light: those are the details that matter, and they are hard to get right too. Gather reference, use reference a LOT.

In-game visual quality checking systems also help. It's really nice to be able to load and display reference side by side with the rendered frame, for example. Or to be able to turn off textures and check only the lighting: diffuse only, or with specular, with specular maps, with normal maps, etc. A surface-blurring shader could be nice too, maybe just biasing the mipmap lookups to get a very blurred mipmap level.

And this is not only true for realistic games. It's the key to achieving any kind of look.

(*) That's also why the technical details of expensive photo equipment matter only for huge prints, mostly in commercial photography, and in most cases are useless; the only things that matter are your vision, the situation, and how comfortable you are with your equipment (tools!).