Rendering is going through a very illiterate era. Innumerate, really.
We had the software rendering era: triangle filling was hard, ninja coders knew how to count instruction cycles and wrote routines in assembly. Stupid errors were made due to a lack of mathematical knowledge, but they were not so important (even if I do think that knowing maths also helps optimization...)
Graphics was full of cheap hacks anyway. There's nothing too wrong with cheap hacks, as long as they look good and you can't do any better. We will always use hacks, because there will always be things that we can't do any better.
Light is simply too complex to get it completely right. And we don't know that much about it, either.
Still, you should know that you're doing a cheap hack. The main point is knowing your limitations. The only thing that you know about a graphics hack is that you can't know its limits. Because you didn't start from any reasonable model, you can't tell which things that model is able to simulate accurately and which it is not; and worse, when you simplify a model you can't tell how much error you introduce and in which cases you are committing it.
When you know all that, you're moving from hacking to approximations, which are a far more refined tool.
Today, we have a lot of computing power. Computer graphics is relatively easy. Anyone can display a spinning cube with a few lines of code, in any language. So you would guess that we are dealing less with hacks and more with maths, right? We should be more conscious, as we no longer have to be concerned with implementation details like how to draw a triangle (fast enough). Our first optimization should be in the algorithms.
Well, unfortunately, it's not so. Not in the slightest. Most of the time we have actually managed to forget about the hacks we used to do and just assume that that's the way things work. We are in a pop era. And don't get me wrong, I love pop, there are many geniuses in it, but we shouldn't be limited only to it!
We recently discovered that we did not know anything about color. We were producing colors, but we did not know anything about them. And we still don't. I guess that most rendering engineers just skimmed through the concepts of gamma, looked at how they could fix that in the shader or by using appropriate samplers and render targets, and hoped really badly that no one would discover any other obvious flaws in their work.
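Just to make the kind of fix I'm talking about concrete, here is a minimal sketch of manual gamma handling in a pixel shader. I'm assuming an albedo texture authored in sRGB and sampled without hardware sRGB conversion, and I'm using the common 2.2 power approximation; albedoSampler, UV, N, L and lightColor are placeholder names, not anything from a real codebase:

    float3 albedo_srgb = tex2D(albedoSampler, UV).rgb;
    float3 albedo = pow(albedo_srgb, 2.2);                   // approximate sRGB -> linear
    float3 lit = albedo * lightColor * saturate(dot(N, L));  // do the lighting math in linear space
    return float4(pow(lit, 1.0 / 2.2), 1.0);                 // linear -> approximate sRGB for display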
More or less the same is happening with normals now. How many people have asked themselves what normals are? What do vector operators do on normals? What are we doing? Recently, someone tried to answer those questions. It's not a complete answer, but there are a few nice pictures, and again, everyone seems happy. How many people actually questioned the interpretation of normal vectors that Loviscach gave? How many people are using his formulas consciously?
People encode normal data into two-channel textures by storing x/z and y/z, as it's faster to decompress (you just need normalize(float3(tex2D(sampler,UV).xy,1)) and you're done), and maybe because they saw some pictures demonstrating that the error distribution of this technique is more even across the hemisphere. Who cares that you can't (easily) encode any normal at an angle wider than 45° from the z axis? Maybe you don't really care, and that error is something you can afford. But still, you should know...
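Just to make that 45° limit concrete, here is a minimal sketch of the encoding; the remap to [0,1] storage and the function names are my own assumptions about how the two channels would be packed:

    // Encode (offline): store the slopes x/z and y/z, remapped to [0,1].
    // Any normal more than 45 degrees away from +z has |x/z| or |y/z| > 1,
    // so it gets clamped and cannot be represented.
    float2 EncodeSlopes(float3 n)          // n assumed unit length, with n.z > 0
    {
        return saturate(n.xy / n.z * 0.5 + 0.5);
    }

    // Decode (in the shader): one fetch, one normalize, done.
    float3 DecodeSlopes(sampler2D s, float2 uv)
    {
        float2 xy = tex2D(s, uv).xy * 2.0 - 1.0;
        return normalize(float3(xy, 1.0));
    }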
You should know that the linear or anisotropic filter in your sampler is going to average your normal data. And averaging does not preserve length, so you will end up with unnormalized normals. Well, that's easy, just normalize them, right? Yeah, that's the least you can do. Every operation that denormalizes something can be "fixed" with a normalize. But what are you doing? Who knows.
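A tiny illustration, with made-up values: bilinear filtering halfway between two texels that hold unit normals 90° apart:

    float3 a = float3(1, 0, 0);
    float3 b = float3(0, 1, 0);
    float3 filtered = lerp(a, b, 0.5);     // (0.5, 0.5, 0)
    float len = length(filtered);          // ~0.707, not 1: the average is shorter
    float3 patched = normalize(filtered);  // unit length again, pointing "between" a and b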
Actually, you shouldn't care that much about normalizing: you could do it only once, at the end, in your shading formula. Of course you can easily build an algebra of vectors on the two-sphere (unit vectors) by taking your familiar linear algebra operators and appending a normalize to each of them. But what are you doing? We use normals as directions; we should have operations that are linear in direction space. If an operation denormalizes our vector but leaves it pointing in the correct direction, that's quite fine! If it leaves it normalized but pointing in a wrong direction, it is not.
Actually, the only way to avoid the filtering error is to encode angles and not cartesian coordinates.
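A minimal sketch of what that could look like, storing spherical angles instead of cartesian components (the remaps and function names are mine); the only point is that the decoded vector is unit length by construction, no matter what the filter did to the stored channels:

    float2 EncodeAngles(float3 n)          // n assumed unit length
    {
        float2 a = float2(atan2(n.y, n.x), acos(n.z));        // azimuth, polar angle
        return a / float2(2.0 * 3.14159265, 3.14159265) + float2(0.5, 0.0);
    }

    float3 DecodeAngles(float2 e)
    {
        float2 a = (e - float2(0.5, 0.0)) * float2(2.0 * 3.14159265, 3.14159265);
        float s = sin(a.y);
        return float3(cos(a.x) * s, sin(a.x) * s, cos(a.y));  // always unit length
    }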
So don't normalize! Know your operations. Know your errors! Or if you don't, then don't try to be "correct" in an unknown model. Just hack, and judge the final visual quality! Know your ignorance. But be conscious, don't code randomly.
And of course those are only examples; I could make countless more. I've seen an axis-aligned bounding-box class taking Vector4 as input. Well, in fact, some functions were using Vector4, some others Vector3, some others were overloaded for both. What are you trying to say? That the AABB is four-dimensional? I don't think so. That you are handling 3D vectors in a homogeneous space? Maybe, but after checking the code, no, they did not correctly handle homogeneous coordinates... Actually, I would be really surprised to see a math library where Vector4 is actually intended for that... Well, I'm just digressing now...
The problem is, unfortunately, that errors in graphics usually won't make your code crash. It will just look fine. Maybe it will look a little bit CG-ish, or you won't be able to replicate the look you were after. Maybe you'll blame your artists. Probably you will be able to see in which cases it does not look good, and add some more parameters to "fix" those errors, until you don't have enough computing power to run everything, or your artists go crazy trying to find values for all the parameters you added. Or you'll wait for more advanced algorithms...
Yeah, raytracing surely will save us all, it's physically correct, isn't it?
P.S. Reading this (and all of my) articles is a lot more interesting if you care to follow the links...
5 comments:
Something related to the idea of your post happens to me when I'm implementing a CG algorithm. I always reach a point in the development of CG algorithms where, from a visual point of view, it looks OK, but from a "correctness" point of view I feel that it's not. I try not to stop there, and I continue the development until I get to a point where I feel comfortable enough with the "correctness" of the algorithm.
The problem is that sometimes, with the pressure of production and deadlines, you only have time to do a visually OK implementation.
Correctness is not the issue. We will always make errors if you think that the correct thing is how light actually behaves (of course, I'm talking about material shaders here). We don't even know exactly how light behaves.
We are just building models (that artists try to fit to their ideas by tweaking parameters). But we should know those models, we should know why we make our errors, and be conscious of them.
amazing post, great job Angelo
I assume you were joking about raytracing.
Of course. Realtime raytracing is lame.