No, I haven't been laid off (at least, not yet). Quite the contrary: our game is reaching alpha, a state where in theory we should be working only on bug fixing, optimization and tuning. In practice most games reach that state only when they ship; some are not so lucky and keep having features added with patches (not counting expansions, which would be separate products).
But it was funny to notice that I started receiving more job offers in this period, from companies interested in getting good engineers, freshly cut... Oh well.
I also had a couple of more interesting things to post, but just today my MacBook died. OK, I know that sounds like "aliens ate my demomaker", but yeah, it's real and I'm really pissed off by it.
An overused analogy
I've always been fascinated by how much good coding looks like good painting. Some start with a preliminary sketch, some do not; I don't think that's so important. The iterative process is. You first start by drawing general light volumes and setting the main colours, the mood of the picture. Then you go down and start to nail down shapes, focusing on the main parts first, bodies, faces. Eventually you'll change some early decisions, because they don't look quite right as the picture progresses. Then you go into the details more and more until you're happy, trying not to overdo it.
Painting has a process, and you'll find the same in most crafts, from sculpture to music. The medium does not matter much; it's just like programming languages are to us. Some ideas are better expressed in one medium, some in another, but in the end it doesn't matter too much...
If we do have to look at the medium, then sculpting materials can be used for some quite nice analogies: some are harder to shape but allow for finer detail (think of marble), some others are fast and can be used to make casts for more durable works, some are obsolete, with fewer and fewer masters able to work with them.
C is marble: Michelangelo can create the most incredible works out of it, but it requires an incredible mind, a lot of muscle, lots of preliminary drawings and many years. We, normal people, should focus on something that's easier to change, but that can still be hardened enough to suit our needs... Anyway...
So coding and art both deal with processes, while still allowing for a lot of freedom and creativity. Why don't we see quite as much structure in our rendering work, then? Are we too young? Even the word "creativity" scares us, but should it?
Creativity is such a taboo word in programming because it evokes things you can't control, but it's quite the opposite: you can, artists do.
I really think that a weekend spent learning to paint, or watching how artists paint, would be really helpful for learning about a good workflow. Artists are not messy at all; they have a very organized, evolved and refined approach to creativity.
In my experience, the rendering work on a game starts, if you're lucky, by gathering references, compiling a feature list, and making mockups of what the end result should look like. From there, engineers make estimates, a plan is made, something gets cut early, and then the project starts.
Agile or not, the methodology often does not make a fundamental difference in the way rendering evolves. It should, and it does change the way your daily work progresses, but often it does not change our perspective at a more fundamental level. Features are engineered, changes happen, people get more and more busy until everything is done. Now take screenshots of the game as it goes through all those steps. For sure you'll notice (well, hopefully) that it's improving over time. But can you notice a pattern? A process? In my experience, most of the time, no, you can't. Is rendering too young to have one?
Things get done in a technical order, based on their risk. If it's new and it's hard, it will be done first. But in this way, how can you know what you are doing? How do you know that what you did is correct, that it fits into the picture quite right? It's like drawing some detailed eyes on a blank canvas, without knowing where they will be, on which face, with which light, or mood...
Short story
Recently I was involved in some very interesting discussions on normals. Without going into details that do not matter, nor that I am allowed to write about, I can say that we have bad normals. Everyone does, really! Simply put, no matter how you transform your meshes, if the transform is not rigid and you apply the same transform to the vertices and to their normals, you'll be wrong. Smooth normals are a big pain in the ass: they're totally fake, and they're computed by averaging face normals, which depend on many vertices, often in a non-trivial way (as usual, the average is weighted according to some heuristics).
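To make the transform half of that concrete, here is a tiny standalone sketch (illustrative only, not our engine code, and it leaves aside the deeper problem of the averaged smooth normals): under a non-rigid transform such as a non-uniform scale, reusing the position matrix on a normal bends it away from the surface, while the inverse-transpose keeps it perpendicular.

```cpp
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

int main() {
    // A surface tangent and its normal on a 45-degree slope (perpendicular to each other).
    Vec3 tangent = {  1.0f, 1.0f, 0.0f };
    Vec3 normal  = { -1.0f, 1.0f, 0.0f };

    // Non-rigid transform: non-uniform scale M = diag(2, 1, 1).
    // Positions (and tangents) transform with M itself.
    Vec3 tangentM = { 2.0f * tangent.x, tangent.y, tangent.z };

    // Wrong: normal transformed with the same matrix M.
    Vec3 normalM  = normalize({ 2.0f * normal.x, normal.y, normal.z });

    // Right: normal transformed with the inverse-transpose of M, i.e. diag(0.5, 1, 1).
    Vec3 normalIT = normalize({ 0.5f * normal.x, normal.y, normal.z });

    // Only the inverse-transpose keeps the normal perpendicular to the transformed surface.
    std::printf("dot(tangent', normal via M)    = %f\n", dot(tangentM, normalM));  // != 0, skewed
    std::printf("dot(tangent', normal via M^-T) = %f\n", dot(tangentM, normalIT)); // == 0, correct
    return 0;
}
```

Even the inverse-transpose only fixes the per-face geometry; averaged, heuristic-weighted smooth normals have no equally clean correction under deformation, which is exactly the pain described above.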
I never realized that before.
Actually, I thought our fault was somewhere else in the math we were using, but I was proven wrong, so, well, that's it. And as all the solutions seem too expensive or too risky to try, I suspect we'll live with it.
What's more interesting is that some time later, another defect was found in another area. As it was discovered by lighting artists, they tried their best to fix it by changing our many lighting variables, but they failed. Eventually someone thought that it could be due to that normal problem, and I was pulled into the discussion. After a while, it was found that no, it's not the lighting nor the normals: it's the animation that's wrong, the shape itself deforms incorrectly. In a subtle way, but enough to cause a big loss of realism.
Where do I want to go with this? Our problem is that we don't build things on a stable, solid base. Not from an engineering perspective (in that case, the solid base would be provided by correct maths and understood approximations of physical phenomena), nor from a rendering one. We build things on hacks, find visual defects, and add more hacks to fix them, in a random order. It's impossible to predict what's affecting what: shaders have hundreds of parameters, nothing is correct, so you can't tell when you're making an error.
We should learn, again, from art. Start with shapes and basic lights (i.e. no textures, only Gouraud shading, shadows and alpha maps). Make sure that the shapes work, that they work even under animation, and provide the means to always check that.
Then focus on the main shaders, on the materials they should model. Find out which lights we need; photography helps: key, fill, background... And when something does not look right, do not fix it, but first find out why it does not.
If it's too saturated, the solution is not to add a desaturation constant, but first to check out why. Are we raising a color to an exponent? What sense does that operation make? Why do we do that? Is the specular too bright? Why? Is our material normalized, or does it reflect more light than it receives?
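To illustrate that last question, here is a hedged sketch (the Phong-style lobe and the ks/exponent values are placeholders of mine, not anything from our shaders): a "furnace"-style check integrates the specular term times cos(theta) over the incoming hemisphere; if the result exceeds 1, the material reflects more energy than it receives.

```cpp
#include <algorithm>
#include <cstdio>
#include <cmath>
#include <random>

constexpr double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Un-normalized Phong-style specular: ks * (R_l . V)^n, with R_l the
// reflection of the light direction about the normal. Purely a stand-in.
static double specular(Vec3 n, Vec3 l, Vec3 v, double ks, double exponent) {
    double nl = dot(n, l);
    Vec3 r = { 2.0*nl*n.x - l.x, 2.0*nl*n.y - l.y, 2.0*nl*n.z - l.z };
    double rv = std::max(dot(r, v), 0.0);
    return ks * std::pow(rv, exponent);
}

int main() {
    const Vec3 n = { 0.0, 0.0, 1.0 };       // surface normal
    const Vec3 v = { 0.0, 0.0, 1.0 };       // view straight along the normal
    const double ks = 3.0, exponent = 4.0;  // "artist tuned" values, no normalization
    const int samples = 1 << 20;

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // Uniform hemisphere sampling (pdf = 1 / 2pi): integrate f_r * cos(theta_i).
    double sum = 0.0;
    for (int i = 0; i < samples; ++i) {
        double cosT = uni(rng);
        double sinT = std::sqrt(1.0 - cosT*cosT);
        double phi  = 2.0 * kPi * uni(rng);
        Vec3 l = { sinT*std::cos(phi), sinT*std::sin(phi), cosT };
        sum += specular(n, l, v, ks, exponent) * cosT;
    }
    double albedo = sum * (2.0 * kPi) / samples;  // divide by the pdf, average

    std::printf("directional-hemispherical reflectance: %f\n", albedo);
    if (albedo > 1.0)
        std::printf("-> the material reflects more energy than it receives\n");
    return 0;
}
```

With these placeholder values the closed form is ks * 2pi / (n + 2) = pi, about 3.14, so this un-normalized lobe reflects roughly three times the energy it receives; a normalized lobe keeps that integral at or below 1 by construction.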
Addendum - update:
In the non-realtime rendering world, one of the advantages of unbiased estimators of the rendering equation is being able to compute the error, and the convergence, of the image. In our world, there is no notion of error, because nothing is correct, nor consistent. Many times there aren't even lights or materials, just pieces of code that emit colors.
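As a minimal sketch of what "computing the error" means there (the sample values below are made up, standing in for a path tracer's per-sample radiance estimates), an unbiased estimator lets you keep a running mean and variance per pixel, and the standard error of that mean tells you how converged the image is:

```cpp
#include <cstdio>
#include <cmath>
#include <random>

// Per-pixel running statistics (Welford's algorithm): the mean is the pixel
// estimate, the variance gives a standard error that shrinks as 1/sqrt(n).
struct PixelStats {
    long   n    = 0;
    double mean = 0.0;  // running mean of the samples
    double m2   = 0.0;  // running sum of squared deviations

    void add(double sample) {
        ++n;
        double delta = sample - mean;
        mean += delta / n;
        m2   += delta * (sample - mean);
    }
    double variance() const { return n > 1 ? m2 / (n - 1) : 0.0; }
    double stdError() const { return std::sqrt(variance() / n); }
};

int main() {
    // Stand-in for per-sample radiance estimates coming out of a path tracer.
    std::mt19937 rng(1234);
    std::exponential_distribution<double> fakeRadiance(1.0);

    PixelStats pixel;
    for (int i = 0; i < 100000; ++i)
        pixel.add(fakeRadiance(rng));

    // "How converged is this pixel?" has an actual, quantitative answer here.
    std::printf("estimate = %f +/- %f (standard error)\n", pixel.mean, pixel.stdError());
    return 0;
}
```

Nothing in a biased, hacked-together real-time pipeline gives you a number like that: there is no measure of how far you are from "correct".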
In the end we have lots of parameters and techniques, and an image that does not feel right. If our artists are very skilled, by looking at the reference pictures they might identify the visual defect that makes the image look wrong. The specular is off. The hue shifts in the skin are incorrect. But even if we are able to find those problems (not trivial), finding the cause is nearly impossible. Wrong normals? Badly tuned subsurface scattering, or is it the diffuse map? Or the specular of the rim lights? The only option is to add another parameter or another hack that makes the situation less bad in that particular case, but that probably adds other visual defects somewhere else, and for sure adds complexity...
It's the same thing that happens with bad, rotten code; the only difference is that this time it's our math and our processes that are bad, not the resulting code, so we're less trained to recognize it...
3 comments:
Awesome post, your last three paragraphs in particular, everyone should read and understand!
http://www.paulgraham.com/hp.html
Nick: thanks
AA: I know Hackers and Painters by Paul Graham, but I don't love what he writes and I never finished reading that essay of his; that's why I didn't link it in my post :)