
26 February, 2011

Surviving C++

WARNING: This post (like 99% of my posts regarding C++) contains suggestions, but also scathing critiques of what could be your beloved language.
If you are offended by the latter, then before rushing to post a comment, at least make sure you've read and understood the "Defective C++" section of that FAQ, which contains some of the best high-level remarks (i.e. without going down into many of the details, like bad defaults or awkward keywords) on what is bad about C++'s design. 
I know it's hard to get out of our own comfort zone, and we all tend to get defensive about things we are used to, even if they really suck. If you want to argue and discuss the merits, then I'm happy to chat, either via comments or via the MSN widget somewhere on the right of this blog page. If you just want to write that I don't understand C++ or something like that, don't: it's just a waste of time (and I would really love to check whether you are a better C/C++ or any-other-language programmer than I am...)

Introduction
 
If you've been reading my blog for some time, you know what I think about C++. It's a horrible mess that managed to be successful exactly because it's lame. It was designed to be a glorified macro system on top of C in order to please the market of C programmers: not to scare them with this new "OO" thing, but to give them something they could relate to, that they could reason about in terms of C. In retrospect it was not even good at achieving that (see D for example), but it was first, and within a few years most C programmers were "won" over by C++ (not all), so now we have to live with it.

How do we survive it? How do we manage to create projects and sell software made with it? Well, the best answer so far has been to "subset" it. Basically every professional studio working in C++ (that is, in gaming, everyone) restricts C++ in a "somewhat safe" subset that is more manageable and less tricky to work with, to avoid at least the biggest pitfalls.

What does it take to be a C++ master? Is it the knowledge of fancy design patterns (omg!) or expression templates? Of course not, those are things that all the kids talk about! Being a C++ master is really about knowing C++'s defects more than its features, as demonstrated by these three fundamental, cornerstone books: Effective C++, More Effective C++, C++ Coding Standards.

In fact, no one uses straight C++ to make anything. In the real world everyone starts by sub-setting C++ into something "safer" and then adding back the fundamental missing features (i.e. memory management, serialization, reflection and so on) via libraries, macros or other trickery (i.e. code-to-code transformers, code generators or parsers and so on), so the C++ we really use is not a standard, but a studio-specific version of the language.

This "custom" C++ is then enforced either informally via code reviews and coding standards, or more strictly with the help of configurable "lint" software.

So one day I decided to stop using the blog only to blame C++ and actually try to help developers, by seeing if people could agree on some "gaming/realtime rendering" C++ standards. With the help of a shared notepad, I seeded the guidelines with my personal considerations and waited for people to start adding their own.


I think we got something interesting there, and I'll clean it up and post a snapshot of that work on the blog. But the more I got into this, the more I grew dissatisfied with the guidelines: most of them are "aesthetic", in the sense that they are certainly needed to reduce the WTF/minute ratio when dealing with C++, but they don't fundamentally change the experience of developing a project.

What is fundamental, really, when dealing with a big software project? What is the first thing we need in order to survive?

Everything else does not matter (much)

Apart from running and producing an output, what is the most important quality that a software project has to have, from a development perspective? If you think about it it's obvious. The most important quality that we want is the ability to change it.
That is what we do all day, every day: we edit code. We don't just add functionality. We don't design a perfect, immutable system that will withstand centuries (especially not if you're making a game!).

Code changes, code rots, hardware changes, design changes, ideas change, players change: the single most important quality of code is its ability to be changed without breaking, without effort.

Also, really, if we can change code easily, we have all that we need. A junior programmer made a mess in our camera system near the deadline of our game? No problem, for the next title we can rewrite it! We have a crash that we don't have any clue about? Well, maybe we can rip out some systems, one by one, and find out the culprit. Designers changed their mind on the control system? Well, we might write some prototypes with them at their desk.

The easier our language and project are to change, the less time and the fewer obstacles change demands, and the more all the other problems fade into the background. It's not a coincidence that the more dynamic a language is, the less need there is for fancy debugging tools. Who needs a watch window, indeed, if I can just add a widget on screen, on the fly, that graphs the value of a given variable?

Unfortunately though, when you look at our beloved (not!) C++, there is little to be happy about. C++ as a medium for our art looks more like statuary marble than modeling clay: it requires a team of muscular stone cutters under the guidance of a genius in order to produce some amazing sculptures. And once a given idea has been translated into the stone, you can hardly change it without re-sculpting most of it, or having to work within the constraints that its current shape imposes on you.

C++ is not only statically typed but also statically linked, and its performance depends on that. You don't get any fancy JIT or runtime optimization, nor even a standard way of loading modules or functions (even if, to be fair, 99% of the systems you work with will have some non-standard way to achieve some sort of dynamic linking). Not that I'm advocating dynamic typing here, that's another can of worms (and you might even argue that types are not really the best form of static checking, and C++ classes are surely not the best expression of user-defined types, but I digress...), but static linking surely gives us fewer options.

So what can we do? Well, the art of sculpting in C++, the art of design, really becomes the art of subdividing our sculpture into pieces; the real design challenge is all about how to cut our work into pieces. Too many of them and the sculpture will be a fragile and ugly mess (translation: craptastically slow OOP shit). Too few and we lose flexibility.
Also, we would of course love each piece to be connected to as few other pieces as possible, otherwise replacing it would be a pain, and we'd want the connections themselves to be as simple as possible.

How do we achieve that? Well, it can be tricky, because dependencies are in many ways evil, but they are also the most innocent-looking feature of any language, devils in disguise. Structured programming is all about composition (even disregarding the excesses of OOP)!

If you see these... think twice.

I originally thought of these as "coding flashcards" for a course at my former workplace, but project deadlines had priority and I didn't end up finishing them. So when I started the collaborative guidelines experiment I wrote them in a similar "visual" style: if you see this code, your spider-coding-senses should tingle. 

Great, experienced coders develop an instinct, an aesthetic if you want, something largely subconscious about good structure and good-looking code that is often difficult to formalize; it's a sort of gut feeling that comes from having read a lot of code and knowing the perils of certain structures.

So here it is. This is the part of these visual coding guidelines that really matters: the global-disaster-causing wall of shame.

You see: Foo.h (header file)
You think:
  • Is everything in this header really meant to be seen outside?
    • Can we make more things part of the implementation?
    • E.g. look for private functions that could be moved outside of the class.
  • Is everything in this header really meant to be seen by every other module depending on this one?
    • Can we move more things into a header that is seen by this module only? (E.g. a good scheme is to have local includes in a module_name/source directory and the outside-visible ones in module_name/include/module_name.)
  • Should we consider using PIMPL (Façade if you like changing names of existing techniques)? Or abstract interfaces?
  • Are we exporting concrete types, or contracts and functions?
    • If we are exporting concrete types, are they fundamental enough? If we are exporting interfaces, do we really need them or are we over-abstracting, over-generalizing?
      • Remember: don't generalize something if you don't already have two or three different usages of it. Don't abstract anything if you don't have two or three different dependencies on it. Code should change to adapt to situations, not try to predict them.
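As a concrete illustration of the PIMPL option mentioned above, here is a minimal sketch (Foo, Impl and Compute are made-up names, and the header and implementation file are collapsed into one listing for brevity):

```cpp
// Foo.h -- the public header exposes only an opaque pointer to the
// implementation; clients never see Impl's layout or its dependencies.
class Foo {
public:
    Foo();
    ~Foo();
    int Compute(int x) const;
private:
    struct Impl;  // defined only in Foo.cpp
    Impl* impl_;
};

// Foo.cpp -- data members and private helpers live here; changing them
// does not touch the header, so client code does not recompile.
struct Foo::Impl { int scale; };
Foo::Foo() : impl_(new Impl) { impl_->scale = 2; }
Foo::~Foo() { delete impl_; }
int Foo::Compute(int x) const { return x * impl_->scale; }
```

The price is one indirection and one allocation per object; in exchange, the header stops leaking implementation details.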
You see: #include (especially in a header)
You think:
  • Do we really need the include, or can we use a forward declaration?
  • Can we include that file, or are we introducing a dependency that we don't want across two modules of our system? (Even better, make sure this can't happen by structuring your project so that accessing a given module from another one requires changing the project properties, and make sure all these changes are approved by a technical director.)
    • Is the dependency static (function linking / inline functions / templates / types) or dynamic (interface calls / function pointers)?
    • Which one makes most sense here? Do we need performance or abstraction?
      • You will usually want to depend statically on core system libraries (memory, math...) and dynamically on other modules.
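To illustrate the forward-declaration point, a minimal sketch (Renderer and Camera are hypothetical names; the two headers are shown in one listing for brevity):

```cpp
// Renderer.h -- no #include "Camera.h" needed: Camera is only used by
// pointer here, so a forward declaration is enough, and edits to Camera.h
// won't trigger a recompile of everything that includes Renderer.h.
class Camera;  // forward declaration instead of an include

class Renderer {
public:
    Renderer() : camera_(0) {}
    void SetCamera(Camera* camera) { camera_ = camera; }
    Camera* GetCamera() const { return camera_; }
private:
    Camera* camera_;  // pointers/references to incomplete types are fine
};

// Camera.h would be included only in Renderer.cpp, where Camera's
// members are actually called.
class Camera { /* ... */ };
```

The rule of thumb: includes in headers are for types you use by value or inherit from; everything touched only through pointers and references can be forward-declared.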
You see: a_type* Foo(...) or a_type& Foo(...) (interface returning a pointer, or returning a structure containing pointers)
You think: 
  • Who will own that memory? Who will destroy it? When?
    • What if we want to hot-swap resources?
    • Better to use a handle? Reference-counted?
  • Should we be allowed to mutate it (non-const)?
    • And who else will mutate it?
  • Is the pointer type the least derived possible (if it's part of a type hierarchy)?
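One possible shape for the "handle" option above, sketched with made-up names (TexturePool, Resolve...): a generation counter lets the owner destroy or hot-swap the resource, while stale handles resolve to null instead of dangling:

```cpp
#include <vector>
#include <cstddef>

struct Texture { int width; };  // stands in for a real resource

// The handle is just an index plus a generation tag: trivially copyable,
// never owns memory, safe to keep around after the resource dies.
struct TextureHandle { std::size_t index; unsigned generation; };

class TexturePool {
public:
    TextureHandle Create(int width) {
        slots_.push_back(Slot());
        slots_.back().texture.width = width;
        slots_.back().generation = 0;
        TextureHandle h = { slots_.size() - 1, 0 };
        return h;
    }
    // Returns null if the handle is stale (resource destroyed or replaced).
    Texture* Resolve(TextureHandle h) {
        if (h.index >= slots_.size()) return 0;
        Slot& s = slots_[h.index];
        return (s.generation == h.generation) ? &s.texture : 0;
    }
    void Destroy(TextureHandle h) {
        if (h.index < slots_.size())
            slots_[h.index].generation++;  // invalidates all old handles
    }
private:
    struct Slot { Texture texture; unsigned generation; };
    std::vector<Slot> slots_;
};
```

Ownership stays unambiguous: the pool owns everything, callers hold only handles, and hot-swapping a resource is just a Destroy plus a Create under the same slot policy.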
You see: class Foo : public Bar
You think: 
  • Are we inheriting from an interface (pure abstract class)? Are we generalizing needlessly? Is the interface as small as possible? Then go ahead.
  • Otherwise, the class must have some data associated with it. Are we sure we want to couple Foo both to Bar's interface and to its in-memory layout? 
    • No, you are not. Don't do it, use composition instead. [More Effective C++, Item 33: Make non-leaf classes abstract]
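A minimal sketch of the suggested split, with hypothetical names: inherit only from a pure interface, and get the shared concrete bits via a member rather than a base class:

```cpp
// Pure abstract class: interface only, no data, no layout to depend on.
class ILogger {
public:
    virtual ~ILogger() {}
    virtual int Log(int severity) = 0;
};

// The reusable concrete bits live in their own small class...
class LogFormatter {
public:
    int Format(int severity) { return severity * 10; }
};

// ...and are composed, not inherited: FileLogger depends on ILogger's
// interface and on LogFormatter's interface, never on another class's
// in-memory layout.
class FileLogger : public ILogger {
public:
    virtual int Log(int severity) { return formatter_.Format(severity); }
private:
    LogFormatter formatter_;  // composition
};
```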
You see: class Foo { void MyFunction(…) }
You think:
  • Can it be implemented as an external function instead? 
    • Especially if it's private: should it be visible to everyone who can see the header? Prefer implementation-only functions to private member functions.
  • Should MyFunction be declared const?
  • Does it really have to be part of the class, or should it be an external utility function?
    • Is it needed to make the class complete and minimal?
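One way to apply the "implementation-only function" advice, sketched with made-up names (Gadget, ClampTo01): the unnamed namespace gives the helper internal linkage, so it never appears in any header and can change freely:

```cpp
// Gadget.cpp -- a private helper that needs no access to the class's
// internals lives in an unnamed namespace: invisible outside this
// translation unit, no header pollution, no recompiles when it changes.
namespace {
int ClampTo01(int x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }
}

// The class itself (normally declared in Gadget.h) stays minimal.
class Gadget {
public:
    Gadget() : value_(0) {}
    int Set(int v) { value_ = ClampTo01(v); return value_; }
private:
    int value_;
};
```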
You see: static
You think:
  • Can we safely have multiple instances of this static (e.g. if we link this statically into multiple dynamically-linked modules), or will it be a problem with DLLs?
  • Can we safely access this static from multiple threads? Will that be needed?
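A sketch of one way around both problems, with hypothetical names: make the state an explicit context object instead of a hidden file-scope static, so the caller decides how many instances exist (one per DLL, one per thread, one global...) and who guards them:

```cpp
// Instead of a mutable file-scope static (one copy per statically-linked
// module when DLLs are involved, and unguarded across threads), the state
// becomes an object the caller owns and passes in explicitly.
struct AllocatorStats {
    long allocations;
    AllocatorStats() : allocations(0) {}
};

void* ModuleAlloc(AllocatorStats& stats, unsigned size) {
    ++stats.allocations;  // touches only the caller's context, no hidden global
    return new char[size];
}

void ModuleFree(void* p) {
    delete[] static_cast<char*>(p);
}
```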
Conclusion
This is it for now. Look at the typewithme pad above to see the full coding guidelines. I'm sure I forgot other things and I will probably edit this post over and over. If you want, comment as usual or better, join the discussion on the typewithme shared document!

p.s. There is of course quite some discussion in the comments, and something actually persuaded me that I should clarify here. A point someone raised is basically: "you seem to hate C++, but still in the end you say things about decoupling that apply to many other languages" (to some degree, I would add). 

It is 100% true and right. That is what I felt and probably failed to clearly express. I started this "collaborative guidelines" experiment, I got some great input there, and it was all nice and exciting. But then I started thinking that yes, these are the landmines we all know and have to avoid; there are books written on them and everyone really agrees. 

Yes, there might be people arguing about the details (is Boost good or evil? Design patterns? And so on); surely everyone comes up with their own implementation of the same extensions (reference counting, serialization and on and on), and that sucks, but still the bottom line is: everyone knows the default new/delete operators are useless, everyone knows that "friend" is bad, and so on. 
There are things to always avoid, things to mostly avoid, and things to remember because of bad defaults (i.e. "explicit" or "virtual ~Foo", redefining equality operators or hiding them, and so on), and probably even many things that everyone forgets until they hit them, but at the end of the day... they don't matter much.

In all my years as a professional programmer I've learned that. You can work with the worst code and language and tools on earth: if the problems are "local", hidden behind well-designed abstractions, then the implementation behind them does not really matter. Or it matters, yes, but at a totally different order of magnitude.


So yes, I still hate C++. But we can work with it. If we pay attention to the only thing that really matters, the decoupling, then life will be fine. And slowly, we will even be able to move stuff over to other languages, if it's all nicely decoupled and the interfaces are sane. And believe it or not, it's already happening.


Now go, look at your project and try an experiment. Take a small piece that has well-defined inputs and outputs (with some coworkers we once tried this with our camera library). Can you take it apart and put it in a DLL? Or rewrite it in another language? If yes, then sweet dreams, your project is nice. If not, then you should think: maybe you have a problem...

08 February, 2011

Gamma and diffuse shading

Remember that (tex^a * shading)^b = tex * shading^b if a = 1/b... (cheap diffuse-only stuff, i.e. particles, vegetation... of course this assumes software gamma in the shader, which will give you no gamma-correct blending)
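A quick numeric sanity check of the identity, as a C++ sketch (the function names are made up; tex is the gamma-space texture value and b the gamma exponent, e.g. 2.2):

```cpp
#include <cmath>

// The "cheap" path: pre-raise the texture to a = 1/b, multiply by the
// (linear) diffuse shading, then apply gamma b once at the end.
float CheapGammaDiffuse(float tex, float shading, float b) {
    float a = 1.0f / b;
    return std::pow(std::pow(tex, a) * shading, b);
}

// The reference: tex * shading^b, which the identity says is equal.
float ReferenceDiffuse(float tex, float shading, float b) {
    return tex * std::pow(shading, b);
}
```

The point being that the texture never needs to be decoded and re-encoded per pixel; only the shading term pays the pow.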

22 January, 2011

Mythbusting: deferred rendering

Today I want to write a bit about deferred rendering and its myths. Let's go...

1) Is deferred good?

Yes, deferred is great. Indeed, you should always think about it, if by "deferred" we mean doing the right computations in the right space... You see, deferred shading is "just" an application of a very general "technique". We routinely make these kinds of decisions, and we should always be aware of all our options. 

Do we do a separable, two-pass blur or a single pass one? Do we compute shadows on the objects or splat them in screen-space? What do I pass through vertices, and what through textures?

We always choose where to split our computation in multiple passes, and in which space to express the computation and its input parameters. That is fundamental!

Deferred shading is just the application of this technique to a specific problem: what do we do if we have many analytic lights in a dynamic scene? With traditional "forward" rendering the lights are constant inputs to the material shader, and that creates a problem when you don't know which lights will land on which shader. You have to start creating permutations, generating the same shader with support for different numbers of lights, then at runtime see how many lights influence a given object and assign the right shader variant... All this can be complicated, so people started thinking that maybe having lights as shader constants was not really the best solution.

Bear with me. Let's say that you have a forward renderer that assigns lights to objects; it's working, but you're fed up with it. You might notice that it works better if the objects are not huge and you can cap the maximum number of lights per object. In theory, the finer you can split your objects the better: you don't have too many lights overlapping a given pixel, maybe three or four maximum, but when the objects are large compared to the lights' areas of influence, things start to be painful.
What would you start thinking? Wouldn't it be natural to think that maybe you can write the indices of the lights somewhere else, not in the pixel shader's constants? Well, you might think of writing some indices to your lights in the pixels... and here comes Light-Indexed Deferred Rendering.

Let's say, on the other hand, that in your forward renderer you really hated creating multiple shaders to support different numbers of lights per object. So you went all multipass instead. First you render all your objects with ambient lighting only, then for each extra light you render the object with additive blending, feeding that light as input.
It works fine, but every time you're executing the vertex shader again, and computing the texture blending to multiply your light with the albedo. As you add more textures, things really become slow. So what? Well, maybe you could write the albedo out to a buffer and avoid computing it so many times. Hey! Maybe I could write all the material attributes out: normals and specular too. Cool. But now I don't really need the original geometry at all; I can use the depth buffer to get the position I'm shading, and draw light volumes instead. Here it comes: the standard deferred rendering approach!
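The evolution described above, caricatured on the CPU with made-up names: the attribute pass writes material data once per pixel (the "G-buffer"), then the lighting pass loops over lights reading only that buffer, so no geometry or material work is repeated per light:

```cpp
#include <vector>

struct GBufferTexel { float albedo; };  // normals, specular... elided
struct Light { float intensity; };      // position, attenuation... elided

// Lighting pass: additively accumulate every light per pixel; the albedo
// was fetched and blended exactly once, during the G-buffer pass.
std::vector<float> ShadeDeferred(const std::vector<GBufferTexel>& gbuffer,
                                 const std::vector<Light>& lights) {
    std::vector<float> out(gbuffer.size(), 0.0f);
    for (std::size_t i = 0; i < gbuffer.size(); ++i)
        for (std::size_t l = 0; l < lights.size(); ++l)
            out[i] += gbuffer[i].albedo * lights[l].intensity;
    return out;
}
```

On a GPU the outer loop is the screen and the inner loop is replaced by drawing light volumes with additive blending, but the data flow is the same.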

So yes, you should think deferred. And make your own version, to suit your needs!

p.s. it would probably have been better to call deferred shading "image-space shading". It's all about the space the computations happen in. What will we call our rendering the day we combine virtual texture mapping (clipmaps, megatextures or whatever you want to call them) with the idea of baking shading in UV space? Surface caching, I see. Well, it's ok; nowadays people call globals "singletons" and pure evil "design patterns", you always need to come up with cool names.

2) Deferred is the only way to deal with many lights.
Well if you've read what I wrote above you already know the answer. No :)

Actually, I'll go further than that and say that now that the technique has "cooled down", there is no reason for anyone to implement a pure deferred renderer. And if you're doing deferred, chances are you have a multipass forward technique as well, just to handle alpha. Isn't that foolish? You should at the very least leverage it for objects that are hit by a single light!

And depending on your game, multipass on everything can be an option, or generating all the shader permutations, or a hybrid of the two, or of the three (with deferred thrown in too). Or you might want to defer only some attributes and not others, work in different spaces...

But it's not only about this kind of optimization. You should really look at your game art and understand what the best space to represent your lights in is. The latest game I shipped defers only SSAO and shadows, and I experimented with many different forms of lighting, from analytic to spherical bases to cubemaps. And it ended up using a mix of everything...

3) Deferred is an alternative to light maps.
Not really.

Many artists came to me thinking that deferred is the future because it allows you to place lights in realtime. Well, I'm telling you: with CUDA, distributed processing and so forth, if your lightmap generation is taking ages and your artists can't get decent feedback, it's a problem with your tools, not really with the lightmap technique per se.

Also, deferred handles a good number of lights only if they are unshadowed point lights. So either you get a game that looks like some bad Phong-shiny 3D Studio 4 for DOS rendering, or your artists have to start placing a lot of lights with "cookies" to fake GI (and back to the '90s we are). 

Or we will need better realtime GI alternatives... like computing light maps in realtime... Still, light maps are not bad: for static scenes, or the static part of your scene, they still make a lot of sense, and they always will, as they are an expression of another of these general techniques: precomputation, a.k.a. trading space for performance.

4) Deferred shading is slower than deferred lighting (a.k.a. light prepass).
Depends. The usual argument here is that deferred shading requires more memory than deferred lighting, thus more bandwidth, thus it's slower because deferred is bandwidth-limited. It turns out that's a mix of false statements and questionable ones.

First, it's false that deferred shading uses considerably more memory. Without going into too many details: usually deferred shading engines use four RGBA8 render targets plus depth, plus a render target to store the final shaded result. Deferred lighting needs at least one RGBA8 (normals and specular exponent) plus depth for the lighting pass. The lighting needs to be stored in two RGBA8 buffers to have comparable quality (some squeeze it into a single RGBA8, some use two RGBA16s; it really depends on the dynamic range of your lights), plus you need the buffer for the final rendered result. So it's basically six 32-bit buffers versus five, not such a huge difference.
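The buffer arithmetic above, spelled out as a sketch (assuming 32 bits per surface, i.e. RGBA8 color targets and a 32-bit depth buffer; function names are made up):

```cpp
const int kBytesPerSurface = 4;  // one 32-bit surface per pixel

// Deferred shading: 4 G-buffer targets + depth + final color buffer.
int DeferredShadingBytesPerPixel()  { return (4 + 1 + 1) * kBytesPerSurface; }

// Deferred lighting: normals/spec + depth + 2 lighting buffers + final color.
int DeferredLightingBytesPerPixel() { return (1 + 1 + 2 + 1) * kBytesPerSurface; }
```

That is 24 versus 20 bytes per pixel: a 20% difference, not a 2x one.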

Second, more memory does not imply more bandwidth. And here is where things start to be variable. Both methods are basically overdraw-free in the attribute-writing passes, as you can sort your objects front to back and use a bit of z-prepass (or even a full one, if you want to compute SSAO or shadows in screen space at that point). The geometry pass in deferred lighting is also basically overdraw-free (as the hi-z has already been primed). So what really matters is the lighting pass. Now, if you have a lot of overdraw there, you are in danger. You can decide whether to be bottlenecked in the blend stage (i.e. on PS3 with deferred lighting) or in the texture one (deferred shading). Or you can decide to do the right thing and do the "tiled" variant of deferred rendering.

Last but not least, deferred might not actually be bandwidth-limited. I've seen more than one engine where things were actually ALU-bound. And I've seen more than one engine struggling with vertex shading, thus being limited by the two geometric passes deferred lighting has. 
And I'm not alone; in the end it really depends on your scene, your platform and your bottlenecks. Deferred lighting is a useful technique, but it's not a clear winner over deferred shading or any other technique.

5) Deferred lighting can express better materials than deferred shading.
Nah, not really. It's true that you have a second geometry pass where you can pass per-vertex attributes and do some magic, but it turns out that the amount of magic you can perform with the lights already computed and fixed to the Phong model is really little. Really little.

Also consider that deferred lighting works with a fundamental flaw, which is blending specular contributions together. In many implementations it also allows only a monochromatic "specular light".

Now, there are some lighting hacks that work better with deferred lighting and some others that work better with deferred shading, but in general both techniques decouple lights from objects "too late". They do it at the material-parameter level, that is to say, deep inside the BRDF. In the end all your materials will use the very same shading model, minus some functions applied to it via lookup tables.

At the opposite end is the light-indexed technique, which decouples lighting from materials as "early" as possible, that is to say, at the light-attribute fetching stage. Can something sit in a middle ground between the two? Maybe we could encode the lights in a way that still allows BRDF processing without needing to fetch the individual analytic light attributes and integrate them one at a time? Maybe we could store the radiance instead of the irradiance... maybe in an SH? I've heard a few discussions about this, and it's in general impractical, but recently Crytek managed to do something related to this in CryEngine 2 to express anisotropic materials.

6) Deferred lighting works better with MSAA.
Yes. Sort-of.

Unless you write per-sample attributes, no deferred technique really works with MSAA without some care when fetching the screen-space attributes: that might be a bilateral filter, the "inferred lighting" discontinuity filter, or other solutions. This ends up being the "preferred way" of doing MSAA and it's applicable to everything. And many nowadays do not MSAA at all and do a post-filter, like MLAA, instead.

Even if you just do shadows in screen space, as for example the first Crysis does, you will end up with aliasing if you don't filter (and Crysis does, but the shadow discontinuities are not everywhere in the scene, so it's ok-ish).

Now, with DirectX 10.1, or with some advanced trickery on previous APIs, you can read individual MSAA samples and decide where to compute shading per sample (instead of per pixel). That means you will need to store all the samples in memory and keep them around for the lighting stage, as these attributes are not colors and can't be blended together in a meaningful way.

This enables you to compute and read per-sample attributes at discontinuities, and this is where deferred lighting has an advantage: the attributes that go into the lighting stage are packed into a single buffer, so storing them per sample requires the same amount of memory as your final buffer (in fact it can share memory with your final buffer, as you won't need these attributes after the lighting stage), and the lighting buffer can be MSAA-resolved, as lighting blends properly.

Doing the same with deferred shading would be a bit crazy, as you would need to store the per-sample attributes of four buffers. It is possible, even if really not great, to do a "manual" MSAA resolve on the G-buffer (thus not keeping all the samples for the lighting stage), where you do standard MSAA averaging for the albedo and use the sample nearest to the camera for the rest; it somewhat works.

Update: What they don't want you to know! :)
To close this article, I'll put here some tips and less-talked-about things about deferred techniques. One advantage of both deferred lighting and shading is that you get cheap decals (both "volumetric" and "standard" ones), as you don't have to compute lighting multiple times: the decals lie on a surface, so you can fetch the lighting you've already computed for it.
That of course means that the decal will need to change the surface's normals, if needed, by blending in its own, so you don't really get two separate and separately-lit layers, but it's still a great way to add local detail without multitexturing the whole surface...
If you think about it for a second, it's the very same advantage you have with lights: you don't need to assign them per mesh, potentially wasting computation on parts of the mesh that a given light doesn't touch, or split the mesh in complicated ways.

Also, it is neat to be able to inject some one-pixel "probes" into the G-buffer here and there, and have lighting computed on them "for free" (well... cache thrashing and other penalties aside) for particles, transparencies and other effects; see for example the work done on Saints Row 3.

Another advantage, especially relevant today (with tessellation...), is that you can somewhat lessen the problem of non-full pixel quads generated by small triangles (and the absence of quad fusion on GPUs). This is especially true of deferred shading, as it employs a single G-buffer pass that is taxing on bandwidth but less so on processing (or... it could be that way). In deferred lighting you can achieve nearly perfect culling (i.e. with occlusion queries generated during the first G-buffer pass) and zero overdraw in the second geometry pass, but you still have the quad problem...

Which, unfortunately, will also affect what AMD calls "Forward plus" (which should be called light-indexed deferred instead, as that's what it is... there is still research to be done there, on how to compress light information per tile, avoiding having to store a variable light list and so on... AMD did not do much there, really).

Linux is not for nerds anymore?

I've always, always hated Linux. I've never even really loved the open-source movement too much: I really like the idea, but in practice most projects end up being tech-oriented piles of "features" with no clear design or user target in mind. It's somewhat ironic, even if totally reasonable, that most of the really usable, great open-source projects are either derived from or inspired by commercial applications (i.e. Netscape - Mozilla - Firefox) or backed by big corporations.

But still, I've always been interested in it, and how can you not be... an OS that you can hack freely: it's a sexy idea for every programmer.

So from time to time I give Linux a chance, every couple of years let's say, only to be frustrated by it. 

At the beginning you needed to recompile the kernel for almost everything. Then came the time when you could not find a driver for almost any non-obsolete external peripheral. Then it was the KDE vs Gnome era; both were ugly and slow and crashed every few minutes, so it was really a tough choice between the two. For a while I was into the live-CD craze, with Knoppix and all the other distributions. They were never really useful to me, but I still managed to burn dozens of DVDs with them (we're talking about times before cheap and big USB keys... when people burned CDs!), and it was a way to keep an eye on the progress being made...

And then it came... Ubuntu. A serious project, with money behind it and a focus. And everything apparently changed.

I'm seriously beginning to think that Linux has made it. This might not be a surprise for all of you who work with VIM and love GCC and so forth, but for me, and I bet many others, it's news.

I've installed Jolicloud, a netbook- and webapp-centric Ubuntu spinoff, on my two netbooks (an ancient EEEPC 901 and an HP Mini 1000) and I'm impressed. My girlfriend is currently away for work and she brought the EEEPC with her. She is not a computer geek (at all), and she is even more conservative than me when it comes to changing user interfaces and trying not to be annoyed by technology, and she is liking Linux...

I didn't have to download any driver. I didn't have to download applications from websites. I have a centralized point for the updates. I have a great interface, with great font rendering. Performance is great. It does not crash.

To be totally honest, I've found one small glitch. The USB installer provided through the website did not work, and the more updated one... was stored on the ISO itself. Just plain dumb. But after extracting it, everything went smoothly.

It's the closest thing to having a Mac without paying for it! A friend of mine also suggested having a look at MeeGo; it looks cool from the screenshots, but I haven't tried it yet. The idea of having too much choice between distributions is already starting to worry me :) 

I also tried Ubuntu on an oldish Acer, but it didn't work too well and I uninstalled it. As you might have gathered from the tone of this post, I'm really not into fiddling with the machine, so I didn't try to benchmark the system or fix it; it might be something specifically wrong with that machine, or just that the default Ubuntu is too heavy for it.

A key to Jolicloud's success is also that it specifically supports given hardware (netbooks, and it has a long list of compatible ones), so it can be successfully optimized for that target.

Could it be that an OS built on a mediocre kernel and outdated, server-centric concepts has evolved so much as to really be a viable alternative for consumers? And not as an embedded thing, but on PC hardware, competing with Windows? Incredible, but true.

I'm a believer. Now if only they had a decent IDE... Or if photoshop ran on it...

14 January, 2011

Poll results - 2010 most envied rendering

Update: Results
Unsurprisingly, the game that got most of the "best graphics" awards also won our poll: God of War III, with 28% of the votes. I don't have a PS3 (couldn't give money to Sony after they gave me such a bad SDK... and with that ugly design... now I actually like it, and I'm considering buying one), so I could not play it much, but I see its greatness.

Frostbite comes second with Battlefield: Bad Company 2, at 15%.

Then, surprisingly, there is a tie between Mass Effect 2 and Red Dead Redemption. I say surprisingly because in my opinion Red Dead is one of the best-looking games ever. I finished it mostly because I enjoyed the vistas so much. Mass Effect 2 is an awesome game, but graphically it's pretty standard Unreal Engine stuff (though very well used, especially in the cutscenes... also proving that dropping frames is not that important during cutscenes if your visuals are nice).

Among the iOS titles, Carmack clearly wins over Sweeney (3% vs 1%).

All the other games follow with just a few votes each. I didn't expect Kirby's Epic Yarn to be more popular than Black Ops among developers (6% vs 4%), Starcraft 2 deserves its 7%, and I'm also happy to see that it's not only me who thinks NBA 2K is not such a great achievement graphically (only one person voted for it).

---

Originally I had another poll in mind for this post. Something more technical, about programming languages... But a year just ended, all the gaming websites are running best-of-2010 specials, and I've been tempted to do something similar. Also, this year something happened while I was playing a game...

For me, games and programming have pretty much been part of the same experience. I started on a Commodore 64 that belonged to some cousins of mine, playing games. I was a kid, around seven or eight, and my mother didn't like the idea of me playing videogames too much, so she asked one of my older cousins to teach me programming... 

Now even if I've been doing those two things for a while now, on a professional level they are very distinct. I'm a rendering programmer, and I never really cared too much about the game I was making, other than trying to achieve the best visuals possible. 

That's not to say that I do it on purpose; it just happened that I've always worked on game genres that were not what I like to play, and that was never a problem for me. In general I don't care too much about the overall product; I'd try to help other game areas for sure, but it's not quite the same thing. 

And I always thought I would be fine even working on a game mediocre in terms of gameplay, if it was striving to be the best it could in terms of graphics. Even when I look for a job, I rarely care whether a given company makes games that I enjoy or not... 

...At least until this year, when I played Modern Warfare 2. I have to say, this game hooked me so much with its single-player campaign that after finishing it I went straight to check whether Infinity Ward had some suitable openings. It's just that amazing.


So my 2010 poll is the following: which game (among the ones shipped in 2010) would you like to have been part of (rendering-development-wise!)?

As always, I'll post the results on the blog. You can vote using the widget on the right of this blog page.