This is a comment on this neat post by Christer Ericson (so you're supposed to follow that link before reading this). I've posted the comment here on my blog because it lets me elaborate on it further, and also because I think the subject is important enough...
So basically what Christer says is that graphical (i.e. graph/node based) shader authoring systems are bad. Shaders are performance critical and should be authored by programmers. Also, graphs make global shader changes way more difficult (e.g. remove feature X from all the shaders... now it's impossible, because each shader is a completely unrelated piece of code made with a graph).
He proposes an "ubershader" solution: a shader that has a lot of capabilities built in, which then gets automagically specialized into a number of trimmed-down ones by tools (that remove any unused stuff from a given material instance).
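For concreteness, here is a minimal HLSL sketch of the pattern; every name and switch in it is invented for illustration, not taken from Christer's post or any specific engine:

```hlsl
// Hypothetical ubershader pixel shader: every optional feature sits behind a
// #define, so a tool can emit a trimmed-down variant per material instance by
// defining only the switches that instance actually uses.
sampler2D g_DiffuseMap;
sampler2D g_NormalMap;
float3 g_LightDir;      // normalized, world space
float3 g_ViewDir;       // normalized, world space
float  g_SpecPower;

struct PS_INPUT
{
    float2 uv     : TEXCOORD0;
    float3 normal : TEXCOORD1;
};

float4 UberPS(PS_INPUT i) : COLOR
{
#ifdef USE_NORMAL_MAP
    // Tangent-space transform omitted to keep the sketch short.
    float3 n = normalize(tex2D(g_NormalMap, i.uv).xyz * 2.0 - 1.0);
#else
    float3 n = normalize(i.normal);
#endif

    float3 color = tex2D(g_DiffuseMap, i.uv).rgb * saturate(dot(n, g_LightDir));

#ifdef USE_SPECULAR
    float3 h = normalize(g_LightDir + g_ViewDir);
    color += pow(saturate(dot(n, h)), g_SpecPower);
#endif

    return float4(color, 1.0);
}
```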
I think he is very right, and I will push it further…
It is true that shaders are performance critical: they are basically a tiny kernel in a huuuge loop, so tiny optimizations make a big difference, especially if you manage to save registers!
The ubershader approach is nice; in my former company we pushed it further. I made a parser that generated a 3dsmax material plugin (script) for each (annotated) .fx file. Some components in the UI were true parameters, others changed #defines; when the latter changed, the shader had to be rebuilt. Everything was done directly in 3dsmax, and it worked really well.
To deal with incompatible switches, my system had shader annotations that could disable switches based on the status of other ones in the UI (and a lot of #error directives to be extra sure that the shader was not generated with mutually incompatible features). And it was really, really easy; it's not a huge tool to make and maintain. I supported #defines of "bool", "enum" and "float" type, and the whole annotated-.fx-to-3dsmax-material-GUI parser was something like 500 lines of maxscript code.
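To give an idea of the flavor of it, here's a tiny invented example; the annotation syntax below is made up, the real one was simply whatever my maxscript parser understood:

```hlsl
// UI annotations the maxscript parser turned into 3dsmax widgets: a "bool"
// define becomes a checkbox, an "enum" a dropdown, a "float" a spinner;
// toggling a define triggers a shader rebuild.
//
// @define bool  USE_DETAIL_MAP  default(false)
// @define enum  BLEND_MODE      values(MULTIPLY ADD LERP) default(MULTIPLY)
// @define float DETAIL_TILING   range(1.0, 32.0) default(8.0)

// Belt-and-braces guard against mutually incompatible switches:
#if defined(USE_DETAIL_MAP) && defined(USE_PARALLAX)
    #error Detail mapping and parallax cannot be enabled together
#endif
```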
We didn't have just one ubershader made in this way, but a few of them, because it doesn't make sense to pile too many features into a single shader when you're trying to simulate two completely different material categories... But this is not enough! First of all, optimizing every path is still too hard. Moreover, you don't have control over the number of possible shaders in a scene.
Worse yet, you lose some information. Let's say that the artists author everything well, caring about performance measures etc. (in fact our internal artists were extremely good at this). But what if you wanted to change all the grass materials in your whole game to use another technique?
You could not, because the materials are generic selections of switches, with no semantics! You could remove something from all the shaders, but it's difficult to replace some materials with another implementation. You could add some semantic information to your materials, but you would still have no guarantees about the selection of features the artists chose to express a given instance of grass, so it becomes problematic.
That's why we intended to use that system only as a prototype, to let artists find the stuff they needed easily, and then coalesce everything into a fixed set of shaders!
In my new company we are using a fixed set of shaders, generated easily by programmers, usually by including a few implementation files and setting some #defines; that is basically the very same idea, minus the early-on rapid-prototyping capabilities.
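Something along these lines, say (file names and switches invented for the sketch):

```hlsl
// grass.fx - one concrete shader out of the fixed set; almost no code of its
// own, just a configuration of the shared implementation files.
#define NUM_TEXTURE_LAYERS 2
#define USE_ALPHA_TEST     1
#define LIGHTING_MODEL     LIGHTING_LAMBERT

#include "common_vs.fxh"
#include "lighting.fxh"
#include "common_ps.fxh"
```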
I want to remark that the coders-do-the-shaders approach is not good only because performance matters. IT IS GOOD EVEN FROM AN ART STANDPOINT. Artists and coders should COLLABORATE. They have different views and different ideas; only together can they find really great solutions to rendering problems.
Last but not least, having black boxes to be connected encourages the use of a BRDF called "the-very-ignorant-pile-of-bad-hacks": an empirical BRDF made of a number of phong-ish lobes modulated by a number of fresnel-ish parameters, which in the end produces a lot of computation and a huge number of parameters that drive artists crazy, and still can't be tuned to look really right...
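To make the criticism concrete, here's a caricature of that pattern in HLSL; every name and parameter is invented:

```hlsl
// Two phong-ish lobes, each modulated by its own fresnel-ish bias. Nothing
// is energy conserving, so brightening one lobe forces retuning the others,
// and the parameter count only ever grows.
float3 PileOfBadHacksBRDF(float3 n, float3 l, float3 v,
                          float3 specColor1, float specPower1, float fresBias1,
                          float3 specColor2, float specPower2, float fresBias2)
{
    float3 h     = normalize(l + v);
    float  ndoth = saturate(dot(n, h));
    float  fres  = pow(1.0 - saturate(dot(n, v)), 5.0);

    float3 lobe1 = specColor1 * pow(ndoth, specPower1) * lerp(fresBias1, 1.0, fres);
    float3 lobe2 = specColor2 * pow(ndoth, specPower2) * lerp(fresBias2, 1.0, fres);
    return lobe1 + lobe2;
}
```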
The idea of having the coders write the code, wrap it in nice tools, and hand the tools to the artists is not only bad performance-wise; it's bad engineering-wise (most of the time you spend more resources making and maintaining those uber-tools than you would spend by having a dedicated software engineer working closely with artists on shaders), and it's bad art-wise (as connecting boxes has very limited expressive power).
5 comments:
It is clear from your post that you (both you and Christer) make wild assumptions about how such systems are implemented for games and then present these assumptions as flaws. I would recommend investigating, or working with, good implementations of a graph shader system in a game engine before arguing for or against them.
Believe me, many of your arguments are flawed simply because you assume a naive implementation.
Yours Truly, I would like to believe you, but you don't provide any evidence of my arguments being flawed. So I don't know which part of my post you see as "naive".
That shaders are performance critical? Well, that's simply a fact: 90% of the GPU execution time is spent in the shaders; the Larrabee paper provides some interesting details about that.
That hand-optimizing such shaders provides better performance than tool-generated ones? Well, in my experience that's really true. Even if shader code is easy to machine-optimize, there are still plenty of things that can't be done automatically, like changes in the packing of constants and textures, algorithmic improvements, load balancing between vertices and pixels and between number of threads and number of instructions, coordinate-system changes, etc... I've often seen 2-3x improvements in the performance of shader code after hand-optimization.
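A trivial example of the constant-packing point, in D3D9-era HLSL (all names invented):

```hlsl
// Under the D3D9 constant-register model, each loose scalar typically lands
// in its own float4 register:
float g_SpecPower;   // c0.x (c0.yzw wasted)
float g_RimPower;    // c1.x (c1.yzw wasted)
float g_Opacity;     // c2.x (c2.yzw wasted)
float g_Tiling;      // c3.x (c3.yzw wasted)

// Packed by hand into a single register. A tool can't safely do this for you:
// the constant layout is part of the shader's contract with the engine code
// that fills the constants, so changing it means changing that code too.
float4 g_Params;     // x = spec power, y = rim power, z = opacity, w = tiling
```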
Maybe you argue that it's wrong that graph-based systems provide less semantic information than switch-based ones, which in turn are worse than hand-picked ones? And that this lack of semantics makes late, global changes more difficult? That also seems pretty evident to me...
I pushed my opinions further than Christer did, and actually said that graph-based systems are bad in theory too, as they encourage the wild use and mixing of empirical BRDF models... In my personal experience that's true, and it really makes the difference between the CG look and realism (and between having 100 parameters and 10, too)...
Last but not least, even if technical artists with advanced graph-based systems could really do fine work, and in many cases they do, I hate that approach: it pushes artists and coders away from each other.
Those advanced tools not only have a number of drawbacks and require a lot of care to be used properly; I've seen them in many, many cases become a serious productivity problem.
In my company we have a super-advanced animation system, which is totally cool. It comes with a huge C# tool that lets technical animators author logic: connecting animations together, emitting and receiving signals, and things like that. The idea was always the same: let artists modify the game with no coder effort.
The result is that, even though we have very skilled animators, and animation is the biggest art department here, the artists introduce a lot of "bugs", and coders are still required to fix them; but they have to use the tools instead of code, and they hate that, they don't want to learn such tools. On top of that, writing code becomes slower, because each change in the animation system now also has to update the tools and involves touching a huge framework...
Even if your experience is different, I still think that those kinds of systems are not the way to go. The way to go is having coders work tightly together with artists, cross-breeding ideas, using science and creativity; we should try to make this kind of paired work more efficient, not decouple it. That means coders should be enabled to code faster, and artists to tune the parameters of the code easily. That means we need fast iteration and small, agile frameworks that are easy to change; ideally, live-coding.
I worked for some time on tools, and I'm pretty skilled now with 3dsMax scripting and the like. I've almost always regretted my decisions to build huge, generic tools. Most of the time it's so much easier to work with the artists and write a small script tailored to a given need. We are too optimistic when we have to evaluate reusability... well, but now I'm writing too much, and I don't want to digress from my main point.
First, I did not call your post naive. You obviously know what you are talking about and many of your arguments are sound.
What I called naive was the implementation of the graph shader system that you assume and discuss. For example, if you look at the Unreal shader system you will see that the BRDF is buried under the graph system; artists have little control over that.
Further, you argue optimisation is key, and I agree, but again you just assume a graph-based system is going to be implemented naively. In practice about 10% of the code of a shader is generated from the graph system and the other 90% comes from the shader framework back end: lighting/BRDF/etc. Even that 10% is actually written by programmers, although the links are provided from the graph. In reality the code generated from a graph system is written by a shader programmer and can be just as hand-optimised to be fast.
Some graph-based shader systems can provide more semantics than handmade ones, and most often the process is more automated than with manual ones. (This, of course, is based on my experience with naive manual shader implementations ;)
The key advantage of a graph shader system is maintenance, i.e. the artists don't have to ask a programmer every time they want to make a minor change to the shaders. For major changes (including BRDF changes) the programmers still need to be involved.
Finally, I completely agree that data-driving everything results in more overhead for programmers: data-driven bugs are more difficult to find and fix. So, my point is, choose very carefully which systems you make data-driven and which parts of them you expose to the artists. For your argument you choose to believe people would expose everything in the shader to a graph-based system; in reality many people who implement such systems know that would be wrong, and choose carefully what is exposed and how.
I hope you see my point: game engine shader graph system != DCC tools shader graph system.
I guess you're right: if you constrain the graph system enough you can achieve a good balance.
But then the advantages of having a graph-based system become thinner, if a programmer is still involved for any non-trivial change. It starts to look more like a switch-based system...
And you still lose semantics, and you still don't have an easy way to control packing and all the tiny details that can make a huge difference (e.g. declaring a constant one way or another could save a register and let the shader be scheduled on more threads...).
If you restrict the changes you can make to, say, the number of used textures... well, my #define-based shader system supported up to four texture layers, and artists could choose any blend mode across them, plus a constant multiplier/opacity parameter, so they had a lot of flexibility in that respect. And yes, if they needed more they had to ask me, so I could go there, see what they wanted, think about the possible solutions, discuss alternatives with them, etc...
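A rough sketch of what that layer system boiled down to (reconstructed from memory, all names invented):

```hlsl
// Blend-mode enum values; the material UI sets LAYER1_BLEND to one of these.
#define BLEND_MULTIPLY 0
#define BLEND_ADD      1
#define BLEND_LERP     2

sampler2D g_Layer0;
sampler2D g_Layer1;
float     g_Layer1Opacity;   // the "true parameter" multiplier/opacity

float4 SampleLayers(float2 uv)
{
    float4 c = tex2D(g_Layer0, uv);
#if NUM_LAYERS > 1
    float4 c1 = tex2D(g_Layer1, uv);
    #if   LAYER1_BLEND == BLEND_MULTIPLY
        c.rgb = lerp(c.rgb, c.rgb * c1.rgb, g_Layer1Opacity);
    #elif LAYER1_BLEND == BLEND_ADD
        c.rgb += c1.rgb * g_Layer1Opacity;
    #else // BLEND_LERP
        c.rgb = lerp(c.rgb, c1.rgb, c1.a * g_Layer1Opacity);
    #endif
#endif
    // ...same pattern repeated for layers 2 and 3...
    return c;
}
```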
And I don't think that having to ask is bad because it restricts the artists' domain; I think it's good, because it improves the overall game quality!
Lastly, of course, when I write a post saying "X is bad" (and I think the same goes for Christer), it doesn't mean that X is completely unusable. I know there are people who use such systems to good effect, but I still hope that readers can see my point...
Hi, this subject is very touchy, but I thought I'd share my own experience. To give a bit of context on my opinion here: I have been a game developer for 8 years now and I have seen the tech evolve. I was the rendering lead engineer on a triple-A PS3/Xbox360 game that shipped earlier this year using the Unreal engine. So, to satisfy Yours Truly's requirement, I know all about Unreal's implementation of the node-graph shader system. All the flaws of this system are very well pointed out by DeadC0de, and I personally have suffered greatly because of it (well, because of our use of it, actually).

In our last project we had around 4000 different materials (imagine that!), created by some 20-30 different artists, each material with a set of shaders for the depth pass, the emissive pass, the light & shadow passes; counting all the vertex factories, that results in, say, 100 real shaders per material. The result is that most of those materials are sub-optimal, performance-wise and also memory-wise. Optimizing them for PS3 was a real challenge. We managed to optimize some of them in critical areas, but this cost us many precious months of profiling and optimization (really negating all the advantages of the system). The game slipped 3 months, not only because of this, but in part because of this. And did we even get better-looking shaders out of it? I don't think so. No artist was able to bring the best out of the system. They felt empowered and all, but in the end their shaders were no better looking than simpler ones.
So, all this to say: in real life, with the current situation, a node-graph based shader system for a game is a very, very dangerous thing. YOU CAN'T LET IT GET OUT OF HAND. Rules have to be enforced (and when you have to do that, it means your system is flawed!). I am currently working on the sequel to the game, still using the Unreal engine. This time around, we have 2 experienced shader artists who create the parent materials (equivalent to ubershaders) using the material instance system in Unreal. Artists are not allowed to create materials anymore (!!). They must take one of the parent materials from a fixed set and instance it, effectively getting an ubershader with a parameters-and-checkboxes system. This way we have fixed the uncontrolled proliferation of materials/shaders. Also, optimizing a parent material optimizes all of its instanced materials, which is pretty neat. The parent materials do not require engineering support: technical artists can build them and verify performance on the consoles using the performance tools.
In summary, this actually boils down to an ubershader approach (implemented by technical shader artists instead of fully fledged engineers) and proves, to me at least (through those past 3 years of struggle), that a node-graph based shader system cannot work in a real-life game production situation. It really boils down to the points DeadC0de (and Christer) are making. So far, the ubershader (or uber-material) approach has worked really well: we have cleared pre-production and are now in full production, and the number of shaders is quite stable. So I would suggest, to anyone using Unreal, to use the material instance system. I think it is the best hybrid approach, where you can still expose the freedom of the node-based system to experienced technical shader artists and wrap it in an ubershader system (which is the layer normal artists will use).
J-S