
14 August, 2008

Test-Driven-Development

  1. Test
  2. If it didn't compile, add some keywords
  3. Goto 1
This is what test-driven development usually amounts to in games. And it's not that bad: we do (or should) prefer iteration and experimentation over any form of up-front design. Yes, I know that unit tests are very useful for refactoring, and thus simplify some kinds of iteration, but still, it's not enough.
This doesn't mean that automated testing is not important; quite the contrary, you should have plenty of scripts that automate the game and gather statistics. But unit tests are really only a good fit for some shared libraries; I don't think they will ever be truly successful in this field.
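To be clear about what I mean by "scripts that automate the game", here is a minimal sketch: run a bot through a level for a while and dump statistics at the end. All the Game_* hooks below are hypothetical stand-ins (stubbed so the sketch compiles), not a real engine API.

```cpp
// Minimal soak-test sketch; the Game_* hooks are hypothetical stand-ins.
#include <cstdio>
#include <cstdlib>

struct Stats { float avgFrameMs; float peakMemMB; int asserts; };

// Hypothetical engine hooks, stubbed out so the sketch builds standalone.
static bool  Game_Boot(const char* level)       { std::printf("booting %s\n", level); return true; }
static void  Game_InjectRandomInput(unsigned s) { std::srand(s); /* fake pad input */ }
static bool  Game_Tick()                        { return true; }
static Stats Game_GatherStats()                 { Stats s = { 16.2f, 210.0f, 0 }; return s; }

int main()
{
    if (!Game_Boot("levels/test_arena.lvl"))
        return 1;

    // Soak the level for ~10 minutes of simulated 60Hz frames, feeding the
    // game random input, then collect whatever counters the engine exposes.
    for (int frame = 0; frame < 60 * 60 * 10; ++frame)
    {
        Game_InjectRandomInput(frame);
        if (!Game_Tick())
            break;
    }

    const Stats s = Game_GatherStats();
    std::printf("avg frame %.2f ms, peak mem %.1f MB, asserts %d\n",
                s.avgFrameMs, s.peakMemMB, s.asserts);
    return s.asserts == 0 ? 0 : 1;
}
```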

I'm going to leave for Italy, dunno if I'll have time to post other articles. There's stuff from Siggraph that's worth posting, plus a nice code optimization tutorial, the "normals without normals" technique, and a few other code snippets. Those things will probably have to wait until mid-September, when I'll be back from holidays...

12 August, 2008

Ribbons are the new cubes!

Are you making a demo? Don't forget your splines; they're the cool thing now...
Lifeforce
Nematomorpha
The Seeker
Atrium
Scarecrow
Invoke

The "progressively appearing" geometry trick is also commonly used to draw them:
Route 1066
Falling down
Media error
Tactical battle loop
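
For those who haven't seen it, the "progressively appearing" trick is trivial: parameterize the ribbon along its spline and only emit (or draw) the part whose parameter is below an animated threshold. A minimal CPU-side sketch of the idea (my own code, not taken from any of the demos above):

```cpp
// Sample a Catmull-Rom spline, extrude a flat strip around it, and only emit
// the samples whose curve parameter is below an animated threshold, so the
// ribbon seems to grow along its path over time.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)  { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static Vec3 mul(Vec3 a, float s) { Vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }

// Catmull-Rom point between p1 and p2; p0/p3 are the neighbouring controls.
static Vec3 catmullRom(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, float t)
{
    const float t2 = t * t, t3 = t2 * t;
    Vec3 r = mul(p1, 2.0f);
    r = add(r, mul(add(p2, mul(p0, -1.0f)), t));
    r = add(r, mul(add(add(mul(p0, 2.0f), mul(p1, -5.0f)),
                       add(mul(p2, 4.0f), mul(p3, -1.0f))), t2));
    r = add(r, mul(add(add(mul(p0, -1.0f), mul(p1, 3.0f)),
                       add(mul(p2, -3.0f), p3)), t3));
    return mul(r, 0.5f);
}

// Build a triangle-strip ribbon along the spline, but only up to "appear" in
// [0,1]: that single cutoff is the whole trick. Needs >= 4 control points and
// >= 2 samples; "side" is a fixed extrusion direction (a real demo would use
// the moving frame of the curve and rebuild/redraw the strip every frame).
std::vector<Vec3> buildRibbon(const std::vector<Vec3>& ctrl, Vec3 side,
                              float halfWidth, int samples, float appear)
{
    std::vector<Vec3> strip;
    const int numSegs = int(ctrl.size()) - 3;
    for (int i = 0; i < samples; ++i)
    {
        const float u = i / float(samples - 1);
        if (u > appear)              // the progressive-appearance cutoff
            break;

        // Map u to a control-point segment and a local parameter t.
        int seg = 1 + int(u * float(numSegs));
        if (seg > numSegs) seg = numSegs;
        const float t = u * float(numSegs) - float(seg - 1);
        const Vec3  p = catmullRom(ctrl[seg - 1], ctrl[seg], ctrl[seg + 1], ctrl[seg + 2], t);

        strip.push_back(add(p, mul(side,  halfWidth)));   // left edge
        strip.push_back(add(p, mul(side, -halfWidth)));   // right edge
    }
    return strip;
}
```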

Cubes seem to be cool only if you instance a crazy number of them now:
Debris
Momentum

Another cool trend: 2d metaballs
Nucleophile
Incognito (near the end, this one features ribbons too)

But plain old spheres are not forgotten either:
Kindercrasher

Plain old particle systems are out...

10 August, 2008

Small update

I've finished reading the Larrabee paper, linked on the realtimerendering blog. Very nice, interesting in general even if you're not doing rendering... And it has a few very nice references too.

It seems that my old Pentium U/V-pipe cycle-counting abilities will be useful again... yeah!

I'm wondering how it can succeed commercially... It's so different from a GPU that it will require a custom rendering path in your application to be used properly, and I wonder how many developers will do that, as nothing you can do on Larrabee is replicable on other GPUs... Maybe, if its price is in the range of standard GPUs and its speed with DirectX (or a similar API) is comparable... or if they manage to include it in a console. Anyway, it's exciting, and a little bit scary too. We'll see.

I've also found a nice, old article about Xenos (the 360 GPU) that could be an interesting read if you don't have access to the 360 SDK.

Warning: another anti-C++ rant follows (I've warned you, don't complain if you don't like what you read or if you find it boring...)

Last but not least, I've been watching a nice presentation by Stroustrup on C++0x that he gave at the University of Waterloo. It's not new, but it's very interesting, and it shows again how C++ is at an evolutionary end.

Key things you'll learn from it: the C++ design process is incredibly slow and constrained, and C++ won't ever deprecate features, so it can only grow (even if Bjarne would like it to shrink; he says he was unable to convince the compiler vendors...), not change. That means that all the problems and restrictions imposed by C compatibility, and by outright errors in the first version of the language, won't be addressed. It also means that C++ is almost at its end: it's already enormous, it can't shrink, and there is a limit to the number of things a programmer can know about any language. C++ is already so complicated that some university professors use its overload resolution rules as "tricky" questions during exams...

You will also hear the word "performance" every minute or so. We can't do that because we care about performance, we are not stupid Windows programmers! Well, Bjarne, if going "low level" means caring about performance, then why aren't we all using assembly? Maybe because writing programs in assembly was so painful that it not only became impractical, it also hampered performance: it was hard enough to write a working program, let alone profile and optimize it... Try today to write a complete program in assembly that's faster than the same thing written in C (on a modern out-of-order processor, I mean; of course on a C64 assembly is still a nice choice)... So the equation higher-level language == less performance is very simple and very wrong in my opinion, and we have historical proof of that. C++ is dead; it's only the funeral that's long and painful (especially when incredilink takes five minutes to link our pretty-optimized-for-build-times solution).

I can give C++ a point for supporting design-wise optimizations pretty well (i.e. the mature optimizations, the ones you have to do early on, which are really the only ones that matter; for function-level optimizations you could well use assembly in a few places, if you have the time, which is more likely in a language that doesn't waste all of it in the compile/link cycle), while other languages still don't allow some of them (e.g. it's hard to predict memory locality in C#, and thus to optimize a design to be cache efficient, and there's no easy way to write custom memory managers to overcome that either).
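To make that concrete, here is a minimal sketch (mine, not from the talk) of the kind of design-wise, cache-oriented control I mean: a trivial fixed-capacity pool that keeps all objects of one type contiguous in memory, easy (if verbose) in C++, hard to express in C#. It deliberately skips freeing and destructors.

```cpp
// Fixed-capacity pool: contiguous storage, predictable allocation cost,
// cache-friendly linear iteration. Sketch only: no free/destructor handling.
#include <cassert>
#include <cstddef>
#include <new>

template <typename T, std::size_t Capacity>
class Pool
{
public:
    Pool() : mCount(0) {}

    T* alloc()
    {
        assert(mCount < Capacity);
        // Placement-new into the next free slot of the contiguous buffer.
        return new (&mStorage[mCount++ * sizeof(T)]) T();
    }

    // Iterate in memory order: this is what makes the layout cache efficient.
    T*          begin()      { return reinterpret_cast<T*>(mStorage); }
    T*          end()        { return begin() + mCount; }
    std::size_t size() const { return mCount; }

private:
    // Raw storage; a real version would use compiler-specific alignment
    // extensions for SIMD types (see the next snippet).
    union { double mAlign; char mStorage[sizeof(T) * Capacity]; };
    std::size_t mCount;
};

struct Particle { float x, y, z, life; };

void updateParticles(Pool<Particle, 4096>& pool, float dt)
{
    for (Particle* p = pool.begin(); p != pool.end(); ++p)
        p->life -= dt;   // linear walk over contiguous memory
}
```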

Still, C++ does not support them all, and that's why, when performance really matters, we use compiler-specific extensions to C++, e.g. alignment/packing and vector data types... The Wikipedia C++0x page does not list the C99 restrict keyword as a feature of the language; I did not do any further research on that, so I hope it's only a mistake in that article... Even the multithreading support they want to add seems pretty basic (even compared to existing and well-supported extensions like OpenMP), which is quite disappointing for a language that's supposedly performance driven, even more so considering that you'll probably get a stable and widespread implementation of it ten years from now...
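For reference, this is the kind of compiler-specific extension I'm talking about; a small sketch using the MSVC and GCC spellings (other compilers differ), none of which is standard C++ today:

```cpp
// Alignment and no-alias annotations via vendor extensions.
#if defined(_MSC_VER)
    #define ALIGN16  __declspec(align(16))
    #define RESTRICT __restrict
#else // GCC-style
    #define ALIGN16  __attribute__((aligned(16)))
    #define RESTRICT __restrict__
#endif

// 16-byte aligned vector type, suitable for SSE loads/stores.
struct ALIGN16 Vec4 { float x, y, z, w; };

// "restrict" promises the compiler the two arrays don't alias, enabling
// better scheduling/vectorization -- C99 has it, C++0x (so far) doesn't.
void addArrays(Vec4* RESTRICT dst, const Vec4* RESTRICT src, int count)
{
    for (int i = 0; i < count; ++i)
    {
        dst[i].x += src[i].x;
        dst[i].y += src[i].y;
        dst[i].z += src[i].z;
        dst[i].w += src[i].w;
    }
}
```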

P.S. it's also nice to know that the standards committee prefers library functions to language extensions, and prefers building an extensible language over providing a specific functionality natively. Very nice! It would be an even nicer idea if C++ were not one of the messiest languages to extend... Anyone who has had the privilege of seeing an error message from a std container should agree with me. And that is only the standard library, which was made together with the language; it's not really a third-party effort to extend it... Boost is, and it's nice, and it's also clear proof that you have to be incredibly expert to make even a trivial extension, and kinda expert to use and understand one after someone more expert than you has made it! Well, I'll stop here, otherwise I'll turn this "small update" post into another "C++ is bad" one...

07 August, 2008

Commenting on graphical shader systems

This is a comment on this neat post by Christer Ericson (so you're supposed to follow that link before reading this). I've posted the comment on my own blog because it lets me elaborate more, and also because I think the subject is important enough...

So basically what Christer says is that graphical (i.e. graph/node-based) shader authoring systems are bad: shaders are performance critical and should be authored by programmers. Also, they make global shader changes way more difficult (e.g. "remove feature X from all the shaders"... that becomes impossible, because each shader is a completely unrelated piece of code built with a graph).

He proposes an "ubershader" solution, a shader that has a lot of capabilities built into, that then gets automagically specialized into a number of trimmed down ones by tools (that remove any unused stuff from a given material instance)
I think he is very right, and I will push it further…

It is true that shaders are performance critical: they are basically a tiny kernel in a huuuge loop, so tiny optimizations make a big difference, especially if you manage to save registers!

The ubershader approach is nice; in my former company we pushed it further. I made a parser that generated a 3dsmax material plugin (script) for each (annotated) .fx file: some components in the UI were true parameters, others changed #defines, and when the latter changed the shader had to be rebuilt. Everything was done directly in 3dsmax, and it worked really well.

To deal with incompatible switches, my system had shader annotations that could disable switches based on the status of other ones in the UI (plus a lot of #error directives to be extra sure that the shader was never generated with mutually incompatible features). And it was really, really easy; it's not a huge tool to make and maintain. I supported #defines of "bool", "enum" and "float" type, and the whole annotated .fx parser -> 3dsmax material GUI was something like 500 lines of maxscript code.
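To give a rough idea of the pattern (the names here are invented, this is not the actual tool): the UI switches become #defines that get prepended to the annotated ubershader source before compiling each variant, and the shader itself rejects impossible combinations with #error.

```cpp
// Tool-side sketch: turn a material instance's feature switches into a
// #define prelude for the ubershader. Names are invented for illustration.
#include <string>
#include <vector>

struct FeatureSwitch { std::string name; std::string value; };

// Build the full shader source for one material instance.
std::string buildShaderSource(const std::vector<FeatureSwitch>& switches,
                              const std::string& uberShaderSource)
{
    std::string prelude;
    for (std::size_t i = 0; i < switches.size(); ++i)
        prelude += "#define " + switches[i].name + " " + switches[i].value + "\n";
    return prelude + uberShaderSource;
}

// The ubershader (HLSL-ish, shown here as a C++ string) strips features with
// #ifdef and guards against incompatible combinations with #error.
static const char* kUberShader =
    "#if defined(USE_PARALLAX) && !defined(USE_NORMALMAP)\n"
    "    #error parallax mapping requires a normal map\n"
    "#endif\n"
    "float4 ps_main(/* ... */) : COLOR\n"
    "{\n"
    "    float4 color = /* base lighting */ 0;\n"
    "#ifdef USE_NORMALMAP\n"
    "    // ... perturb the normal ...\n"
    "#endif\n"
    "#ifdef USE_SPECULAR\n"
    "    // ... add the specular term ...\n"
    "#endif\n"
    "    return color;\n"
    "}\n";
```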

We didn't have just one ubershader made in this way, but a few, because it doesn't make sense to pile too many features into a single shader when you're trying to simulate two completely different material categories... But this is not enough! First of all, optimizing every path is still too hard. Moreover, you don't have control over the number of possible shaders in a scene.

Worse yet, you lose some information. Let's say that the artists are authoring everything well, caring about performance measures, etc... In fact our internal artists were extremely good at this. But what if you wanted to change all the grass materials in your game to use another technique?

You could not, because the materials are generic selections of switches, with no semantics! You could remove something from all the shaders, but it's difficult to replace some materials with another implementation. You could add some semantic information to your materials, but you still have no guarantees on the selection of features the artists chose to express a given instance of the grass, so it becomes problematic.

That's why we intended to use that system only as a prototype, to let artists find the stuff they needed easily, and then coalesce everything into a fixed set of shaders!
In my new company we are using a fixed set of shaders, generated easily by programmers, usually by including a few implementation files and setting some #defines; it's basically the very same idea, minus the early-on rapid-prototyping capabilities.

I want to remark that the coders-do-the-shaders approach is not good only because performance matters. IT IS GOOD EVEN FROM AN ART STANDPOINT. Artists and coders should COLLABORATE. They have different views and different ideas; only together can they find really great solutions to rendering problems.

Last but not least, having black boxes to connect encourages the use of a BRDF called "the-very-ignorant-pile-of-bad-hacks": an empirical BRDF made of a number of Phong-ish lobes modulated by a number of Fresnel-ish parameters, which in the end produces a lot of computation and a huge number of parameters that drive artists crazy, and still can't be tuned to look really right...

The idea of having the coders write the framework code, wrap it in nice tools, and hand those tools to the artists is not only bad performance-wise, it's bad engineering-wise (most of the time you spend more resources making and maintaining those uber-tools than you would by having a dedicated S.E. working closely with artists on shaders), and it's bad art-wise (as connecting boxes has very limited expressive power).