
05 February, 2010

The pitfalls of experience

Ours is an industry that, despite being very young, treasures experience. And it's not hard to understand why. An AAA game is a product that (hopefully) will be sold to millions of users, eager to find faults in your work.
Usually a project runs from one to three years, from the beginning to the final shipped version, with deadlines that, more often than not, simply can't slip. In those years you're going to face all kinds of engineering problems, from writing complicated technology that has to work in realtime, to resolving intricate software and hardware problems, to managing team dynamics. And everything has to be done under ever-changing requirements and designs.

Add on top of that the fact that most of the time you're going to work in a framework that effectively discourages experimentation, with punishing iteration times and complicated, badly written legacy code that works only as long as you never touch it, and you will easily understand why expertise is such a valuable virtue.

You have one shot to accomplish anything. You have to do everything right, on time, on budget, and you have to compete in an aggressive worldwide market. Ever wondered why, lately, games are more and more the same? Why we tend to take no risks?

Experience works wonderfully in those conditions, because most of the time you already know how to solve a given problem: you're an expert. And for all the other problems, you can still see a reasonable solution, one you're confident will work. You can quickly discard bad ideas, you understand whether something is possible or not, and you can estimate how much effort a given thing will take.

Experience is an effective noise filter: it filters out the average ideas other people usually have, and focuses on the more productive and safe path. And it's exactly there that things start going all wrong. You know that a given thing will work because you already made it work in the past, or because you worked on something similar and can interpolate between things you've done and adapt them.

You make great things possible, but totally discard the impossible ones. The impossible ideas, or even the wrong ideas, the unreasonable ideas, are the ones that geniuses have. They are the innovations, the change. Ironically, all game companies will have those words in their mission statement. But few really know what that implies, and how you can encourage change.
We've all seen impossible things being done. And then, when we were told the magic behind an idea, a technique, most of the time we see how trivial a given solution really is, if you dare to explore outside the limits of your experience.

Alice laughed: "There's no use trying," she said; "one can't believe impossible things."
"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."


This, from "Through the Looking-Glass", is a fundamental lesson for us to learn. The first time I realized its importance was by looking at Crysis. One of the artists in my company used a tool (3D Ripper DX) to dump all the meshes, render calls and textures of a frame of the game. While looking at them, he found a texture that looked like ambient occlusion, but done dynamically, and showed it to us (programmers).
I didn't know how that was possible, and before seeing it, I would not even have tried to do such a thing. But once I knew it was possible, it took me just a few days to come up with a shader that, I later found, was really close to what they did. It used the depth buffer, and it raytraced against it by sampling in steps, using a routine I adapted from relief mapping. All it took was knowing it was possible.
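Once you know it's possible, the trick really is simple. Here's a minimal sketch of the idea in Python (my own toy reconstruction, not the Crysis shader): treat the depth buffer as a heightfield and march over it in steps, counting how many nearby samples sit in front of the current pixel.

```python
import math

def ssao(depth, x, y, radius=4, n_dirs=8, n_steps=4):
    """Crude ambient occlusion for one pixel of a depth buffer
    (a 2D list of view-space depths, smaller = closer to the camera).
    Marches rays across the depth buffer in fixed steps, the same basic
    trick as relief-mapping raytracing, and counts blocked samples."""
    h, w = len(depth), len(depth[0])
    center = depth[y][x]
    occluded, total = 0, 0
    for d in range(n_dirs):
        ang = 2.0 * math.pi * d / n_dirs
        dx, dy = math.cos(ang), math.sin(ang)
        for s in range(1, n_steps + 1):
            sx = int(round(x + dx * s * radius / n_steps))
            sy = int(round(y + dy * s * radius / n_steps))
            if not (0 <= sx < w and 0 <= sy < h):
                continue
            total += 1
            # A sample noticeably closer to the camera than us occludes us.
            if depth[sy][sx] < center - 0.01:
                occluded += 1
    return 1.0 - occluded / max(total, 1)  # 1 = fully open, 0 = fully blocked

# A flat floor: nothing occludes, so the result is 1.0.
floor = [[1.0] * 16 for _ in range(16)]
open_sky = ssao(floor, 8, 8)
```

A real shader adds range falloff, random per-pixel rotation of the sample directions and a blur pass, but the core is just this loop.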

Experience is not something that can be avoided, and it is indeed useful. But as you become more and more experienced, make sure to always keep your mind open to new possibilities.

Experiment. One good thing that helps is to always be more curious than expert. Knowledge is noise: it's made of ideas that you can mix together to create new things. Experience filters the noise. Keeping the two in a good balance at all times is fundamental.

Talk to outsiders. Make presentations, discuss with other people: juniors, artists, people that have a different view of the problem. Most of the time their ideas won't directly help you, but they will shake your brain, force you to take another look at the problem, give you that spark that can become a new solution.
I've often found myself having new ideas, or finding better solutions, just by presenting the work I've done. Most of the time it's not even the discussion that helps: I have new ideas while I write the presentation, or while I explain it, even before we start debating it.

14 January, 2010

Links

http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf: It also explains some of the choices I made here.

Java 7 performance optimizations: escape analysis (some is already in Java 6, as is the new G1 collector)

Disney PTex texture mapping library released under BSD license. Texturing without explicit UV mapping.

Eric Lippert's Fabulous Adventures in Coding. A lot of nice and entertaining information about the details of the C# language and compiler.

My current two favourite flash indie/art games: Windosill and Canabalt

Demo Tube 2 from Atom's Blog.

Kernighan/Ritchie/Lovecraft

11 January, 2010

How do you do your work?

How do you do your work? I would like to know.

Personally, I don't think I really have a process. Or I never noticed one; I change my ways a lot. Sometimes I prototype straight away, sometimes I plan a lot ahead. I have very strong opinions on how things should be done, but in practice I follow my experience more than any set of formal rules. The only common trend is that I end up doing a lot of research on anything, and I tend to iterate a lot, interleaving my work with the artists', trying to communicate a lot. But the details of how this is done vary a lot.

I'm messy, and I have a really bad memory. So, even if I like being messy, finding inspiration from many sources, I have to keep track somehow of what I'm doing and thinking, otherwise I'll forget everything after a while.

There are various aspects to this; it's almost a fight between trying to be creative and trying to find an order among all the ideas and material I use. I don't have many rules, but in the end my workplace always ends up in a given shape.

The workspace. Every time I have to set up a new workspace, the first thing I plan is my computer.

Ergonomics is important. I plan the placement of the monitors first, as it's the most constrained by the lighting conditions, which are often fixed in an office. Then I lay out all the things I like to have near me, because I access those often, and all the things I want to have far from me, because they are distracting or noisy. Then I lay the cables.

I use the space on the walls a lot. I always have them full of prints, from reference images to material from my current project.
I also have lots of unrelated stuff: comics, neat images, everything I find interesting. There is some sort of order though: I have a wall for game-related stuff and one for research papers and things more directly linked to my current work. All the other spaces are more random.

Note taking is fundamental to me. I have two big whiteboards, a lot of post-it blocks and notes, and a lot of different writing instruments. Same thing at home, really. But I've found that this is not too good for keeping track of things, just for playing with ideas, doodling and tinkering.
As an idea-tracking device I currently use some needles to stack small post-its. While we have agile software and the like, I still like to have something physical for my own ideas, for everything that's not a task yet.

Tinkering is important. I have Lego bricks, scissors and knives, and even small objects just to play with, to have tactile feedback. A colleague of mine once showed me this movie, which I find extremely good: Pollinate 2005, the common desk. Lifehacker also offers plenty of inspiration.

The computer. I like to keep my computer tidy and, in some way, organized. First of all I care about the monitors. After I've found the best possible placement in terms of light reflections on them, I adjust them to a comfortable height and angle, considering that I sit very low in my chair, with my arms resting entirely on the desk, and my keyboard and mouse as far away as possible, just under the monitors, so I can work with my arms well stretched.

I currently have three monitors: one for the consoles, and two for the PC. I keep one horizontal and the other vertical (note: rotating the monitor does mess with ClearType. It's still good enough on my setup, but you should know that. Also, VGA input is usually blurrier than DVI; if you're using it, it's a good idea to calibrate your monitor with a fine black-and-white checkerboard).
Then I adjust the monitors to the same brightness and white balance, trying to find something comfortable given the ambient lighting, usually with a hardware calibrator, but you can do the same by eye using some reference charts.

On the software side, there are a few things I always do with a new PC. For readability I enable and tune ClearType, and I customize my colour schemes. There is much debate over those; I won't argue light versus dark, there is a nice stackoverflow thread about that if you're interested (update: this is cool and this one is nice too).
There are a few utilities I also always use. I try to be orthogonal in my choices and not bloat my PC. Currently my must-haves are Launchy, RockScroll (MetalScroll), Unlocker, 7-Zip, IrfanView, Notepad++, Firefox with a few plugins, VLC and CCleaner. Some other nifty utilities are Local History, Everything, GridMove, DropCloth, UltraMon and Beyond Compare.

Music is another fundamental part of my day. I use YouTube a lot for that; some other coworkers of mine treasure Groove Salad.

Don't work. Actually, most of my work-related ideas come when I'm not working. Go for a walk! Explaining your idea to other people also works a lot: even if they don't provide any feedback (and that's rare), nine times out of ten you'll gain more insight in the process. Write a presentation, talk, even to yourself. Write a blog, even if it's messy like mine (and even if I can't really talk about anything related to what I'm doing at work... so this is really a repository for future, or personal, or otherwise lame ideas).

17 December, 2009

Lighting Compendium - part 1

Lighting is still one of the most challenging issues in realtime rendering. There is a lot of research around it, from how to represent lights, to material models, to global illumination effects.

Even shadows can't be considered solved for any but the simplest kind of light source (directional or sunlight, where Cascaded Shadow Maps seem to be a de facto standard nowadays).

It looks like we have a plethora of techniques, and choosing the best can be daunting. But if you look a little closer, you'll realize that, really, all those different lighting systems are just permutations of a few basic choices. And that by understanding those, you can come up with novel ideas as well. Let's see.

Nowadays you'll hear a lot of discussion around "deferred" versus forward rendering, with the former starting to be the dominant choice, most probably because the open-world action-adventure-FPS genre is so dominant.

The common wisdom is that if you need a lot of lights, deferred is the solution. While there is some truth in that statement, a lot of people accept it blindly, without much thinking... and this is obviously bad.

Can't forward rendering handle an arbitrary number of lights? It can't handle an arbitrary number of analytic lights, true, but there are other ways to abstract and merge lights that are not in screen space. What about spherical harmonics, irradiance voxels, lighting cubemaps?
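To make "merging lights" concrete, here is a hedged sketch in Python of the spherical-harmonics route, truncated to the first two SH bands for brevity (the function names and the two-band truncation are mine, not any engine's API): any number of directional lights collapses into four coefficients, which a forward shader could then evaluate per pixel at fixed cost.

```python
# First two real spherical-harmonics bands: four coefficients are enough
# to merge any number of directional lights into a fixed-size structure.
Y0 = 0.282095              # l = 0 basis constant
Y1 = 0.488603              # l = 1 basis constant (times y, z or x)

def project_lights(lights):
    """lights: list of ((x, y, z), intensity) with unit directions.
    Returns 4 SH coefficients: constant size however many lights you add."""
    c = [0.0, 0.0, 0.0, 0.0]
    for (x, y, z), intensity in lights:
        c[0] += intensity * Y0
        c[1] += intensity * Y1 * y
        c[2] += intensity * Y1 * z
        c[3] += intensity * Y1 * x
    return c

def eval_sh(c, direction):
    """Reconstruct the (very low-frequency) merged lighting in a direction."""
    x, y, z = direction
    return c[0] * Y0 + c[1] * Y1 * y + c[2] * Y1 * z + c[3] * Y1 * x
```

Projection is additive, so a thousand lights cost exactly the same shader as one; the price you pay is that only the low-frequency part of the lighting survives.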

Another example is the light-prepass deferred technique. It's said to require less bandwidth than the standard deferred (geometry buffer) one, and to allow more material variation. Is that true? Try to compute the total bandwidth of this method's three passes compared to the standard one's two. And try to reason about how many material models you could really express with the information light-prepass stores...
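If you want to actually do that exercise, here is a back-of-the-envelope version in Python. Every byte count is an illustrative assumption (a four-target RGBA8 G-buffer, an RGBA8 light buffer, no overdraw, no compression), not a measurement of any engine:

```python
# Per-pixel bytes moved, under toy assumptions. Treat as an exercise, not data.
def deferred_bytes(n_lights):
    gbuffer_fill = 16 + 4        # pass 1: write 4 RGBA8 targets + depth
    per_light = 16 + 4           # pass 2: read fat G-buffer, blend colour
    return gbuffer_fill + n_lights * per_light

def prepass_bytes(n_lights):
    thin_fill = 4 + 4            # pass 1: write normal + depth only
    per_light = 8 + 4            # pass 2: read thin buffer, blend light buffer
    material_pass = 4 + 4        # pass 3: re-render geometry, read light, write colour
    return thin_fill + n_lights * per_light + material_pass

for n in (1, 4, 16):
    print(n, deferred_bytes(n), prepass_bytes(n))
```

Under these toy numbers light-prepass moves fewer bytes, but note that it also re-submits all the geometry in the third pass, and that it stores far less material information; change the assumptions and the balance shifts, which is exactly the point.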

It's all about tradeoffs, really. And to understand those, you have first to understand your choices.

Choice 1: Where/When to compute lighting.

Object-space. The standard forward rendering scenario. Lighting and the material's BRDF are computed (integrated) in a single pass, the normal shading one. This of course allows a lot of flexibility, as you get all the information you could possibly want to perform local lighting computations.
It can lead to some pretty complicated shaders and shader permutations as you keep adding lights and materials to the system, and it's often criticized for that.
As I already said, that's fairly wrong, as there's nothing in the world that forces you to use analytic lights, which require ad-hoc shader code for each of them. That is not a fault of forward rendering, but of a given lighting representation.
It's also wrong to see it as the most flexible system. It knows everything about local lighting, but it does not know anything about global lighting. Do you need subsurface scattering? A common approach is to "blur" diffuse lighting, to scatter it across the object's surface. This is impossible for a forward renderer; it does not have that information. You have to start thinking about multiple passes... that is, deferring some of your computation, isn't it?
Another pretty big flaw, one that can seriously affect some games, is that it depends on the geometric complexity of your models. If you have too many, too small triangles, you can incur serious overdraw and partial-quad overheads. Those will hurt you pretty badly, and you might want to consider offloading some or all of your lighting computations to other passes for performance reasons. On the other hand, you get some sort of multiresolution ability for free, because you can easily split your lighting between the vertex and pixel shaders.

Screen-space. Deferred, light-prepass, inferred lighting and so on. All are based on the premise of storing some information about your scene in a screen-space buffer, and using that baked information to perform some or all of your lighting computations. It is a very interesting solution, and once you fully understand it, it might lead to some pretty nice and novel implementations.
As filling the screen-space buffers is usually fast, with the only bottleneck being the blending ("raster operations") bandwidth, it can speed up your shading quite a bit if your triangles are small enough to cause bad quad efficiency (recap: current GPUs rasterize triangles into 2x2-pixel sample blocks; quads on the edges have only some samples inside the triangle, yet all samples get shaded, while only the ones inside contribute to the image).
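That recap can be turned into numbers. This little Python model (a deliberate simplification: no depth test, no overdraw, just coverage) shades whole 2x2 quads and measures how much of that work actually contributes, for right triangles of shrinking size:

```python
def quad_efficiency(covered):
    """covered: set of (x, y) pixels inside a triangle. GPUs shade whole
    2x2 quads, so efficiency = useful samples / shaded samples."""
    quads = {(x // 2, y // 2) for x, y in covered}
    return len(covered) / (4 * len(quads))

def right_triangle(size):
    """Pixels covered by an axis-aligned right triangle with legs `size`."""
    return {(x, y) for y in range(size) for x in range(size - y)}

for s in (32, 8, 2):
    print(s, round(quad_efficiency(right_triangle(s)), 3))
```

Efficiency drops as triangles shrink toward the quad size, because a larger fraction of shaded quads straddles an edge; that wasted shading is what a cheap screen-space fill pass avoids.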
The crucial thing is to understand what to store in those buffers, how to store it, and which parts of your lighting to compute out of them.
Deferred rendering chooses to store material parameters, and to compute local lighting out of them. For example, if your materials are Phong-Lambert, what does your BRDF need? The normal vector, the Phong exponent, the diffuse albedo and Fresnel colour, the view vector and the light vector.
All but the last are "material" properties; the light vector (unsurprisingly) depends on the lighting. So we store the material properties in screen space in the "geometry buffer", and then run a series of passes, one for each light, that provide the last bit of information and compute the shading.
Light-prepass? Well, you might imagine, even without knowing much about it, that it chooses to store lighting information, and to execute passes that "inject" the material information and compute the final shading. The tricky bit, which made this technique not so obvious, is that you can't store stuff like the light vector: in that case you would need a structure capable of storing, in general, a large and variable number of vectors. Instead, light-prepass exploits the fact that some bits of light-dependent information are simply added together in the rendering equation for each light; thus the more lights you have, the more you keep adding, without needing to store extra information. For Phong-Lambert, those would be the normal-dot-view and normal-dot-light products.
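Here is a sketch of that additive trick, in Python rather than shader code, with Lambert diffuse as the light-dependent term (a simplification: a real light-prepass buffer also accumulates a specular term alongside the diffuse one):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_buffer(normal, lights):
    """The light-prepass trick: per-light terms that simply *add* in the
    rendering equation can be blended into one fixed-size buffer value,
    however many lights there are. Here, a Lambert diffuse term per pixel."""
    diffuse = 0.0
    for l_dir, l_intensity in lights:
        diffuse += max(dot(normal, l_dir), 0.0) * l_intensity
    return diffuse  # one scalar stored, regardless of len(lights)

def material_pass(albedo, light_value):
    """Second geometry pass: inject the material and finish the shading."""
    return albedo * light_value
```

The buffer stays the same size whether you blend one light or a hundred; what you give up is any per-light term that does not combine by plain addition.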
Is this the only possible choice for baking lighting in screen space without needing an arbitrary number of components? Surely not. Another way could be using spherical harmonics per pixel, for example... not a smart choice, in my opinion, but if you think about deferred in this way, you can start thinking about other decompositions. Deferring diffuse shading, the one where lighting defines shapes, and computing specular in object space? Be my guest. The possibilities are endless...
But where deferring lighting into multiple passes really shows its power, over forward rendering, is when you need to access non-local information. I've already made the example of subsurface scattering, and also on this blog I've talked (badly, as it's obvious and not worth a paper) about image-space gathering, that is another application of the idea. Screen-space ambient occlusion? Screen-space diffuse occlusion/global illumination? Same idea. Go ahead, make your own!

Other spaces. Why should we restrict ourselves to screen-space baking of information? Other spaces can prove more useful, especially when you need to access global information. Do you need to access the neighbors of a point on a surface? Do you want your shading complexity to be independent of camera movements? Bake the information in texture space. Virtual texture mapping (also known as clipmaps or megatextures) plus lighting in texture space equals surface caching...
Light space is another choice, and shadow mapping is only one possible application. Bake lighting there and you get the so-called reflective shadow maps.
What about world space? You could bake the lighting passing through a given number of locations, and shade your objects by interpolating that information appropriately. Spherical harmonic probes, cubemaps, dual-paraboloid maps and irradiance volumes are some of the names...

Note about sampling. Each space has different advantages; think about how you can leverage them. Some spaces, for example, have components that remain constant in them while they would vary in others: normal maps are a constant in texture space, but need to be baked every frame in screen space. Some spaces enable baking at a lower frequency than others, and some are more suitable for temporal coherency (i.e. in screen space you can leverage camera reprojection, while in other spaces you could avoid updating everything every frame). Hi-Z culling and multiresolution techniques can be the key to achieving your performance criteria.

Ok, that's enough for now.

In the next post I'll talk about the second choice, that is, how to represent your lighting components (analytic versus table-based, frequency versus spatial domain, etc.), and how to take all those decisions: some guidelines to untangle this mess of possibilities...
Meanwhile, if you want to see a game that actually mixed many different spaces and techniques to achieve its lighting, I'd suggest reading about Halo 3...

19 November, 2009

Coding tactics versus design strategies

Today, while I was coming back home from work, I had a discussion with a colleague about one of our most important game tools: our animation system.

Said system is very big and has many features; it's probably one of our greatest efforts, and I doubt there is anything more advanced out there. It's even becoming a sort of rapid game prototyping tool, and it supports a few scripting languages, plus an editor written in yet another language.

Making all those components communicate properly takes quite a bit of code, so we needed to create yet another component that somewhat facilitates connecting the others. While we were discussing the merits of techniques such as code generation versus code parsing for linking different languages together, it became clear that what was really needed was some sort of reflection, and that having said reflection would also remove the need for other parts of code (i.e. serialization).

So I went back home and started thinking about why we didn't have that. Well, surely the problem had to be historical. Right now, looking at our design, the "right" solution was obvious, but I knew that system had started really, really small and evolved over the years. I realized, actually, that we didn't have a standard reflection system in general...

That's rather odd, as in many companies, when you start creating your code infrastructure, reflection ends up being one of the core components, and everyone uses it. It's more like a language extension, one of the many things you have to code to make C++ look less broken. We didn't have anything like that. We really don't have a core; we don't have an infrastructure at all!


Lack of strategy. We do have a lot of code. A lot. Many different tools, you wouldn't believe how many, and I think that no one can really even view them all. We keep all those modules and systems in different repositories, really in different organizational structures with different policies and owners... It's huge, and it can look messy.

To overcome the lack of a real infrastructure, some studios have their own standards, maybe a common subset of this huge amount of code that has been approved and tested as the base of all the products that studio makes. Some other studios do not do that, and others again do it partially.


Are we stupid? It looks crazy. I started thinking about how we could do better. Maybe, instead of choosing a subset of technologies to make our core and gluing them together with some bridge code and some test code, we could make our own copies of what we needed, and actually modify the code so the pieces live together more nicely. Build our infrastructure by copying and pasting code, modifying it, and not caring about diverging from the original modules.

But then what? It would mean that everything we modify lives in its own world: we can't take updates made by others, and we can't take other modules that depend on one of the pieces we modified. And every game, to leverage this new core, would basically have to be rewritten! Even cleaning up the namespaces is impossible! No, it's not a practical way, even if we had the resources to create a team working on that task for a couple of years.


What went wrong? Nothing, really. As bad as it might look, we know it's the product of years of decisions, all of which (or most of which), I'm sure, were sane and made by the best experts in our fields. We are smart! But... in the end it doesn't look like it! I mean, if you start looking at the code, it's obvious that there was no strategy; the different pieces of code were not made to live together.


Is it possible to do better? Not really, no. We know that in software development, designing up front is a joke. You can't gather requirements once; or better, requirements are something you have to continuously monitor. They change, even during the lifetime of a single product. How could we design technology to be shared... it's impossible!

Your only hope is to do the best you can in one product, and then in another, and start to observe. Maybe there is some functionality that is common across them, that can be ripped out and abstracted into something shareable. Instead of trying to solve a general problem, solve a specific one, and abstract when needed. Gather. That's sane; it's the only way to work.


But then you get to a point where something started in a project, got ripped out because it was a good idea to do so, and evolves on its own across projects. Then another studio on the opposite side of the world sees that component, thinks it's cool and integrates it, together with its own stuff that followed a similar path. Those two technologies were not made to work together, so for sure they won't be orthogonal, they won't play nice. There will be some bloat. And the more you write code, promote it to a shareable module, and integrate other modules, the more bloat you get. It's unavoidable, but it's the only thing you could do.


So what? We're looking at a typical problem: strong tactics, good local decisions, that do not add up over time to a strong strategy. It's like a weak computer chess player (or go player; chess is too easy nowadays). What's the way out of this? Well... do as strong computer chess programs do! They evaluate tactics over time. They go very deep, and if they find that the results are crap, they prune that tree; they trash some of their tactical decisions and take others. Of course, computer chess can go forward in time and then back, wasting only CPU time.

We can't go back, but we can still change our pieces on the chessboard. We can still see that a part of the picture is going wrong and delete it... at least if we took the only important design decision out there: making our code optional. That's the only thing you have to do: make sure you work in an environment where decisions can be changed, where code can be destroyed and replaced. Two paths, two different technologies, intersect after ten years. Good. They intersect too much? They become bloated? You have to be able to start a new one that leverages the experience, the exploration done. But that is possible only if everything else does not deeply depend on those two.


Tactics are good. Tactics are your only option, in general. If you're small, have little code, have few programmers, then you might live in the illusion that you can have a strategy. You can't; it's only that strong tactics, at that size, look like a strategy. It's like playing chess on a smaller board: the same computer player that seemed weak becomes stronger (even clearer, again, with go).* And of course that's not bad.

Some design is not bad: drawing the overall idea of where you could be going is like implementing some smarter heuristics for chess. It's useful, but you don't live with the idea that it's going to be the solution. It can improve your situation by a small factor, but overall you will still need to brute-force, to have iterations, to let things evolve. Eventually, over the years, relying on smart design decisions is not what is going to make a difference. They will turn bad. You have to rely on the idea that tactics can become strategy. And to do that, you have to be prepared to replace them, without feeling guilty. You've explored the space, you've gathered information (Metropolis sampling is smart).

---

* Note: that's also why a lot of people, smart people, do not believe me when I say that stuff like iteration, fast iteration, refactoring, dependency elimination, and languages and infrastructures that support those concepts, are better than a-priori design, UML and such. They have experience of worlds (or time spans) that are too small. I really used to think the same way, and even now it's very hard for me to just go and prototype, to ignore the (useless and unachievable) beauty of a perfect design drawn on a small piece of paper. We go to a company, or get involved in a project, or have experience of a piece of code. We see that there is a lot of crap. And that we could easily have done better! Bad decisions everywhere; those people must be stupid (well... sometimes they are; I mean, some bad decisions were just bad, of course). Then we make our new system, we trash the old one, and live happily. If the system is small enough, and the period of time we worked on it is short enough, we will actually feel we won... We didn't; we maybe took the right next move, a smart tactical decision. I hope it didn't take too long to make it... because anyway, that's far from winning the match! But it's enough to make us care way too much about how to take that decision, how to make that next move, and not see that the real match doesn't care much about it, that we are not even fighting the big problem. It's really hard to understand all that; I've been lucky in my career, as I got the opportunity to see the problems at many different scales.