
17 December, 2013

Mental note: shadowmap-space filters

A thought I often had (and chances are many people did and will) about shadows is that some processing in shadowmap space could help for a variety of effects. This goes from blurring ideas (variance shadow maps and variants) to the idea of augmenting shadowmaps (e.g. with distance-to-nearest-occluder information).

I've always discarded these ideas though (in the back of my mind) because my experience with current-gen told me that often (cascaded) shadowmaps are bandwidth-bound. To a degree that even some caching schemes (rendering every other frame, or tiling a huge virtual shadowmap) fail because the cost of re-blitting the cache in the shadowmap can exceed the cost of re-rendering.
So you really don't want to do per-texel processing on them, and it's better instead to work in screenspace, either by splatting shadows in a deferred buffer and blurring, or by doing expensive PCF only in penumbra areas and so on (i.e. with min/max shadowmap mipmaps to compute trivial-in shadow and trivial-out shadow cases and branch).
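As a reference for that last idea, here is a minimal sketch (names and structure are mine, just to illustrate the classification): a conservative min/max mip of the shadowmap lets you mark a lookup as trivially fully lit or fully in shadow, and only pay for the wide PCF kernel on the remaining (penumbra) texels.

```python
import numpy as np

def build_minmax_mip(shadow_depth, tile=8):
    # Conservative min/max over tile x tile blocks of the shadowmap depths.
    h, w = shadow_depth.shape
    blocks = shadow_depth[:h - h % tile, :w - w % tile] \
        .reshape(h // tile, tile, w // tile, tile)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

def shadow_visibility(shadow_depth, mip_min, mip_max, u, v, receiver_depth,
                      tile=8, radius=2):
    # u, v are integer texel coordinates; depths grow away from the light.
    bx, by = u // tile, v // tile
    if receiver_depth <= mip_min[by, bx]:
        return 1.0  # closer than every occluder in the block: trivially lit
    if receiver_depth > mip_max[by, bx]:
        return 0.0  # farther than every occluder in the block: trivially in shadow
    # Ambiguous (penumbra) texels: only here we pay for the full PCF kernel.
    # (A real implementation picks the mip level from the filter footprint and
    # handles image borders and kernels straddling blocks; this sketch ignores both.)
    taps = shadow_depth[v - radius:v + radius + 1, u - radius:u + radius + 1]
    return float((receiver_depth <= taps).mean())
```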

It seems though that lately caching schemes are becoming practical (probably they already are for some games on current-gen; by no means can my experience on the matter on Space Marine be representative of all graphics loads).
In these cases it seems logical to evaluate the possibility of moving more and more processing into shadowmap space.

Then again, a reminder that a great metaheuristic for graphics is to try to reframe the same problem in a different space (screen, light, UV, local, world... pixel/vertex/object...)

Just a thought.

13 December, 2013

Never Again in Graphics: Unforgivable graphic curses.

Well known, zero cost things that still are ignored too often.

Do them. On -any- platform, even mobile.

Please.
  • Lack of self-occlusion. Pre-compute aperture cones on every mesh and bend the normalmap normals, change specular occlusion maps and roughness to fit the aperture cone. The only case where this doesn't apply is for animated models (i.e. characters), but even there baking in "t-pose" isn't silly (makes total sense for faces for example), maybe with some hand-authored adjustments.
  • Non-premultiplied alpha.
  • Wrong Alpha-key mipmaps computed via box (or regular image) filters.
  • Specular aliasing (i.e. not using Toksvig or similar methods; see the sketch after this list).
  • Analytic, constant emission point/spot lights.
  • Halos around DOF filters. Weight your samples! Maybe only on low-end mobile, where you just do a blur and blend, is it understandable that you can't access the depth buffer to compute the weights during the blur...
  • Cartoon-shading-like SSAO edges. Weight your samples! Even if for some reason you have to do SSAO over the final image (baaaad), at least color it, use some non-linear blending! Ah, and skew that f*cking SSAO "up", most light comes from sky or ceiling, skewing the filter upwards (shadows downwards) is more realistic than having them around objects. AND don't multiply it on top of the final shading! If you have to do so (because you don't have a full depth prepass) at least do some better blending than straight multiply!
  • 2D Water ripples on meshes. This is the poster child of all the effects that can be done, but not quite right. Either you can do something -well enough- or -do not do it-. Tone it down! Find alternatives. Look at reference footage!
  • Color channel clamping (after lighting), i.e. lack of tonemapping. Basic Reinhard is cheap, even on shaders on "current-gen" (if you're forced to output to a 8bit buffer... and don't care that alpha won't blend "right").
  • Simple depth-based fog. At least have a ground! And change the fog based on sun dot view. Even if it's constant per frame, computed on the CPU.
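On the specular aliasing point above, a minimal sketch of the Toksvig idea, assuming a Blinn-Phong specular power (the function names are mine): the length of the averaged, unnormalized normal fetched from the normalmap mip chain measures how much the normals in the footprint disagree, and the specular power (or the gloss map) is lowered accordingly. In practice this is typically baked into the gloss mipmaps offline rather than computed per pixel.

```python
import numpy as np

def toksvig_power(avg_normal, spec_power):
    """Anti-aliased Blinn-Phong specular power, Toksvig-style.
    avg_normal: the *unnormalized* average of the normalmap texels covered by
    the pixel footprint (e.g. a fetch from a non-renormalized normalmap mip).
    The shorter it is, the more the normals disagree, the rougher the result."""
    na = float(np.linalg.norm(avg_normal))
    na = min(max(na, 1e-4), 1.0)
    ft = na / (na + spec_power * (1.0 - na))
    return ft * spec_power

# Example: a flat area keeps its gloss, a bumpy one gets it strongly reduced.
print(toksvig_power(np.array([0.0, 0.0, 1.0]), 64.0))   # ~64
print(toksvig_power(np.array([0.0, 0.1, 0.8]), 64.0))   # ~4
```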
If you can think of more that should go in the list, use the comments section!

12 December, 2013

Shit people say: graphics have "peaked"

If you think that rendering has peaked, that's probably a good sign. Probably it means you're not too old and haven't lived through the history of 3d graphics, where at every step people thought that it couldn't get better. Or you're too old and don't remember anymore...

Really, if I think of myself on my 486sx playing Tie Fighter back then, shit couldn't get any better. And I remember Rebel Assault, the first game I bought when I had my first CD-rom reader. And so on and on (and no, I didn't play only Star Wars games, but at the time LucasArts was one of those companies whose titles were all must-buys... until the 360 I've always been a "computer" gamer, nowadays I play only on consoles).

But but but, these new consoles launched and people aren't that "wowed" right? That surely means something. We peaked, it happened.

I mean, surely it is not that when the 360 and later the PS3 came out, games weren't looking incredibly much better than what we had on PS2, right? (if you don't follow the links, you won't get the sarcasm...). And certainly, certainly the PS2 launch titles (the PS2 was touted as more powerful than an SGI... remember?) blew late PS1 titles right out of the water. I mean, it wasn't just more resolution.

Maybe it's lack of imagination. As I wrote, I was the same, many times as a player I failed to imagine how it could get better. To a degree I think it's because video-game graphics, like all forms of art, "speak" to the people of their time, first and foremost. Even if some art might be "timeless" that doesn't imply that its meaning remains constant over time, it's really a cultural, aesthetic matter which evolves over time.
Now I take a different route, which I encourage you to try. Just go out, walk. See the world, the days, the nights. Maybe pick up a camera... How does it feel? To me, working to improve rendering, it's amazing. Amazing! I could spend hours walking around and looking in awe and envy at the world we can't yet quite capture in games.
Now think if you could -play- reality, tell stories in it. Wouldn't it be a quite powerful device? Wouldn't it be the foundation for a great game?

Stephen Shore, one of the masters of American color photography

Let me be explicit though, I'm not saying that realism is the only way, in the end we want to evoke emotions, and that can be done in a variety of ways, I'm well aware. Sometimes it's better to illustrate and let the brain fill in the blanks, emotions are tricky. Take that incredible masterpiece that is Kentucky Route Zero which manages to use flat-shaded vector graphics and still feel more real than many "photo-realistic" games. 
It's truly a game that every rendering engineer (and every other person too) should play, to be reminded of what are the goals we are working for: pushing the right buttons in the brain and trick it to remember or replay emotions it experienced in the real world. 
Other examples you might be more accustomed to are Call of Duty (most of them) and Red Dead Redemption, two games that are (even if it's very questionable actually) not as technically accomplished as some of the competition but manage to evoke an atmosphere that most other titles don't even come close to.

At the end of the day, photo-realism is just a "shortcut": if we have something that spits out realistic images for every angle and every lighting, it's easier to focus on the art, the same way that it's cheaper to film a movie rather than hand paint every frame. It's a set of constraints, a way of reducing the parameter space from the extreme of painting literally every pixel every frame to more and more procedural models where we "automate" a lot of the visual output and allow creativity to operate on the variables left free for tuning (i.e. lighting, cinematography and so on). 
It is -a- set of constraints, not the -only- one. It's just a matter of familiarity, as we're trying to fool our brains into firing the right combinations of neurons, it makes some sense to start with something that is recognizable as real, as our lives and experiences are drawn from real world. But different arguments could be made (i.e. that abstraction helps this process of recollection), this would be the topic of a different discussion. If your artists are more comfortable working in different frameworks there is a case to be made for alternatives, but when even Pixar agrees that physics are a good infrastructure for productive creativity then you have a quite strong "proof" that it's indeed a good starting point.


Diminishing returns... It's nonsense. Not that it doesn't exist as a phenomenon, but we are still far from being there in terms of effort vs quality, and there are many ways to mitigate it in asset production as well (money vs content, which will then hopefully relate to money). 
As I said, every day I come back home from the office, and every day (or so) I'm amazed at the world (I'm in Vancouver, it's pretty here) and how far we still have to go to simulate all this... No, the next step is not going to be VR (Oculus is amazing, truly, even if I'm still skeptical about a thing you have to wear and for which we have no good controls), there is still a lot to do on a 2d screen, both in rendering algorithms and in pure processing power. 
Yes we need more polygons please. Yes we need more resolution. And then more power on top of that to be able to simulate physics, and free our artists from the shackles of needing to eyeball parameters and hand-painted maps and so on...

And I don't even buy the fact that rendering is "ahead" and other things "lag" behind. How do you even make the comparison?
AI is "behind" because people in games are not as smart as humans? Well, quite unfair to the field, I mean, trying to make something look like a photo, versus something behave like a human, seems to be a bit easier to me.
Maybe you could say that animation is behind because well, things look much worse in motion than they do when they are static. But, not only part of that is a rendering problem, but it just says exactly that, things in motion are "harder" than static things, it doesn't mean that "motion" lags behind as a field...
Maybe you can say we implemented more novel techniques in rendering than we did in other fields, animation didn't change that much over the years, rendering changed more. I'm not entirely sure it's true, and I'm not entirely sure it means that much anyways, but yes, maybe we had more investment, or some games did, to be more precise.

Anyhow, we still suck. We are just now beginning to understand the basics of what colors are, of what materials are, how light works. Measure, capture, model... We're so ignorant still. Not to mention on the technical side. Pathetic. We don't even know what to do with most of the hardware yet (compute shaders? for what?).

There could be an argument that spending more money on rendering is not worth it - because spending them on something else now gets us more bang for the buck, which is a variation of the "rendering is ahead" reasoning that doesn't hinge on actually measuring what is ahead of what. I could consider that, but really the reason for it is just that it's harder to disprove. But on the other hand, it's also completely random! 
Did we measure this? That would be actually fascinating! Can we devise an experiment where we can turn a "rendering" knob and an "animation" or "gameplay" knob and see what people are most sensitive to? I doubt it, seriously, but it would be awesome.
Maybe we could do some market research and come up with metrics that say that people buy more games if they have better animation over rendering, but... I think rendering actually markets better (that's why companies name and promote their rendering engines, but not their animation ones).

Lastly, you could say, it's better to spend money somewhere else just because it seems that rendering is expensive and maybe the same money just pays so much more innovation somewhere else. Maybe. This still needs ways of measuring things that can't be measured, but really the thing is some people are scared that asset costs will still go up and up. Not really "rendering" costs, but "art" costs. Well -rendering- actually is the way to -lower- art costs. 
No rendering technique is good if it doesn't serve art better, and unfortunately even there we still suck... We are mostly making art the same way we always did, triangles, UVs, manually splitting objects, creating LODs, grouping objects and so on. It's really sad, and really another reason to be optimistic about how much still we have to do in the future.

Now, I don't want to sound like I'm saying, I'm a rendering guy, my field is more relevant and all the money should go to it. Not at all! And actually I'm passionate about a lot of things, animation for example is fascinating as well... and who knows, maybe down the line I'll do stuff that is completely different from what I'm doing today... I'm just annoyed that people say things that are not really based in facts (and as we're at it, let's also dispel the myth that hardware progress is slowing down...).

Cheers.

10 December, 2013

Never again: point lights

Distant, point, spotlight, am I right? Or maybe you can merge point and spot into an uberlight. No.
Have you ever actually seen a point-light in the real world? It's very rare, isn't it? Even bare-bulbs don't exactly project uniformly in the hemisphere...
If you're working with a baked-GI solution that might not affect you much, in the end you can start with a point, construct a light fixture around it and have GI take care of that. But even in the baked world you'll have analytic lights most often. In deferred, it's even worse. How many games show "blobs" of light due to points being placed around? Too many!
With directional and spots we can circumvent the issue somehow by adding "cookies", 2d projected textures. With points we could use cube textures, but in practice I've seen too many games not doing it (authoring also could be simpler than hand-painting cubes...)
During Fight Night (the boxing game) one little feature we had was light from camera flashes, which was interesting as you could clearly see (for a fraction of a second) the pattern they made on the canvas (journalists are all around the ring) and that was the first time I noticed how much point lights suck.
The solution was easy, really: I created a mix of a point and a distant light, which gave a nice directional gradient to the flash without the cone shape of a spot. You could think of the light as being a point, with the "directional" part being a function that made the emission non-constant on the hemisphere. 


It's a multiply-add. Do it. Now!

Minimum-effort "directional" point
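To make the "multiply-add" above concrete, here is a minimal sketch of what such a point/distant mix could look like (names and structure are mine, not the original Fight Night code): the usual point light gets its emission scaled by a term lerped between a constant and the cosine against a fixed "flash" direction, which is indeed just one extra dot product and a multiply-add.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def directional_point(light_pos, flash_dir, light_color, k, shaded_pos):
    """Point light with a non-constant emission over the hemisphere.
    k = 0 -> plain point light, k = 1 -> emission fully weighted by the
    cosine against flash_dir. Hypothetical sketch, not the shipped code."""
    to_point = normalize(shaded_pos - light_pos)
    profile = max(float(np.dot(flash_dir, to_point)), 0.0)
    emission = (1.0 - k) + k * profile          # lerp(1, profile, k): a multiply-add
    dist2 = float(np.dot(shaded_pos - light_pos, shaded_pos - light_pos))
    return light_color * emission / max(dist2, 1e-6)  # then the usual N.L / BRDF terms
```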


Another little trick that I employed (which is quite different) is to "mix" point and directional in terms of the incoming light direction at the shaded point (biasing the per-pixel light vector towards a fixed direction), at the time an attempt to create lights that were somehow "area", softer than pure points. But that was really a hack...
Nowadays you might have heard of IES lights (see this and this for example), which are light emission profiles often measured by light manufacturers (which can be converted to cubemaps, by the way). 
I would argue -against- them. I mean, if you're going towards a cubemap-based solution, sure, have that as an option, but IES profiles are really meaningful if you have to go out in the real world and buy a light to match a rendering you did of an architectural piece; if you are modeling fantasy worlds there is no reason to make your artists go through a catalog of light fixtures just to find something that looks interesting. What is the right IES profile for a point light inside a barrel set on fire?

A more complicated function

A good authoring tool would be imho just a freehand curve, baked into a simple 1d texture (in realtime, please, let your artists experiment interactively), indexed by the dot product between the light direction and the (normalized) vector from the shaded point to the light position.
If you want to be adventurous, you can take a tangent vector for the light and add a second dot product and lookup. And add the ability to color the light as well: a lot of lights have non-constant colors too, go around and have a look (i.e. direct light vs light reflected out of the fixture or passing through semi-transparent material...).
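Concretely, the lookup could be as simple as the sketch below (the baking itself is just sampling the artist's curve into a small array; names are mine):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def eval_profile(profile_1d, light_pos, light_dir, shaded_pos):
    """profile_1d: the baked freehand curve, e.g. 256 floats (emission vs. angle).
    Indexed by dot(light direction, normalized vector from shaded point to light),
    as described above, remapped from [-1,1] to a [0,1] texture coordinate."""
    to_light = normalize(light_pos - shaded_pos)
    u = (float(np.dot(light_dir, to_light)) * 0.5 + 0.5) * (len(profile_1d) - 1)
    i0 = int(np.floor(u)); i1 = min(i0 + 1, len(profile_1d) - 1)
    f = u - i0
    return profile_1d[i0] * (1.0 - f) + profile_1d[i1] * f   # linear filtering

# The point light's output is then just the usual point light term multiplied by
# eval_profile(...); a second, tangent-based curve and a color ramp would simply
# be two more lookups multiplied in.
```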

1d lookups are actually -better- than cubemap cookies, because if you look at real-world examples many fixtures generate very sharp discontinuities in the light output, which are harder (require much more resolution) to capture in a cubemap...
Exercise left for the reader: bake the light profile approximating a GI solution, automatically adapting it to the environment the light was "dropped" in...


08 December, 2013

Enhance this!

Don't you hate when people have strong critiques towards a thing, but it turns out it's just that they don't know enough about it? Well, I don't, because then I think of how many times in my youth (and let's say only then) I did the same...

Regardless, today I happen to have a bit of time and I saw yet another post laughing at how stupid the "image enhance" trick used in movies and TV series is, and so you get this nerdrage against nerdrage...

Now think a second about this. Who do you think is right? C.S.I., which is a huge TV series using arguably some of the best writers and consultants, or the random dude on the net? Do you think they don't know how realistic any of the techniques they use is?

Do you think they don't actually and very carefully tread between real science and fiction to deliver a mix that is comprehensible and entertains their audience, telling a story while keeping it grounded in actual techniques used in the field? Don't you think -they- know better, and the result was very consciously constructed? 
Well, ok sometimes producers just don't care, they want to tell a story, not write a documentary, but more often than not that's not the case.

The same goes of course for anything, really, especially when something is successful, makes a lot of money, has a lot of money behind, you should always bias yourself towards being humble and assuming the professionals making said thing -know better-.


Now, back to the "image enhance" trick. It turns out it is real science. It's called "super-resolution" and it's a deep field with a lot (really, a lot!) of research and techniques behind it.
It's actually common nowadays as well, chances are that if your TV has some sort of SD2HD conversion, well that is super-resolution in action (and even more surprising are all the techniques that can reconstruct depth from a single image, which also ship in many TVs, the kind of models they came up with for that are crazy!).

The scenarios presented in movies are actually -quite- realistic even if the details are fictionalized. True, the interface to these programs won't look like that, maybe they won't be real-time and surely they won't be able to "zoom" in "hundreds" of times, but they surely can help and surely are used. 

That is to me a reasonable compromise between fiction and reality, as certainly you can and will use computers to get a legible nameplate for a video that is too low-resolution for the naked eye, or match an otherwise unreadable face against a database of suspects and so on, probably not in quite as glamorous and simple way as the movies show, but fundamentally the idea is sound (and I'm quite sure, used in the real world).

It is a non-realistic representation of a very realistic scenario, which is the best that good fiction should try to achieve, going further is silly. Or are you going to argue that a movie is crap because at night for example you can't really see as clearly as they show, or because they don't let a DNA test take weeks and an investigation several years?




When it comes to videos we can use techniques known as "multiple image" super-resolution, registering (aligning) multiple images (frames in this case, i.e. optical flow) and merging the results, which do work quite well. Also, most fictionalized super-resolution enhancements focus on faces or nameplates, which are both much easier to super-resolve because we can "hint" the algorithm with a statistical model (a-priori) which helps tremendously to guide the "hallucination".
And even if hallucinated detail might not hold in a court (the stronger the a-priori model, the more it will generate plausible but by no means always reliable results), it might very well be used as a hint to direct the investigations (I've never seen a case where it was used in courts, always to try to identify a potential suspect or a nameplate, both cases where having a strong probability, even if it's far from certainty, is realistic).
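For the curious, the core of the "multiple image" idea boils down to something like this toy shift-and-add sketch (assuming the per-frame sub-pixel offsets are already known from registration/optical flow; real methods estimate them and use far smarter reconstruction and priors than plain averaging):

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Toy multi-image super-resolution.
    frames: list of HxW grayscale arrays of the same scene.
    shifts: per-frame (dy, dx) sub-pixel offsets, e.g. from optical flow.
    Each low-res sample is splatted at its registered position on a finer grid;
    because the offsets differ by fractional amounts, the high-res grid receives
    genuinely new samples instead of just interpolated ones."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(weight, (hy, hx), 1.0)
    covered = weight > 0
    acc[covered] /= weight[covered]
    return acc  # uncovered texels would be filled by interpolation/priors in a real method
```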

So, bottom line is, if you think these guys are "stuuuuupid", well then you might want to think twice. If you're interested, the science of super-resolution is a good starting point (search Google Scholar for references and so on... I couldn't even find many of my favorite links right now).
It would take many pages only to survey the general ideas in the field. Don't limit your imagination... Computer science is more amazing than you might think... We reconstruct environments from multiple cameras or even from a sweeping video... we can capture light in flight, we can read somebody's heartbeat from video, fucking use lasers to see around corners and yes, even take some hints about an environment from reflections in corneas...




And by the way, don't bitch about Gravity, try to enjoy the narrative instead. You might live a happier life :)

05 December, 2013

Notes on Epic's area lights

Finally, PBR is becoming mainstream... But, it's not easy, especially if you want to stay "correct". Are you sure your pipeline has no errors? Do you know the errors you have? What is your ground truth? Acquired data? Path traced solutions?

I'm planning to share some of my notes on PBR methods and certain findings and thoughts, this is a starter, on Brian Karis' excellent "representative point" area lighting as explained in Siggraph's 2013 PBR course.

I won't say much about it actually, mostly because my own research is not cleared for publication yet. If I were you though, I would also keep an eye on GPU Pro 5, as Michal Drobot's publication might change the state of the art once more (and I'm not really teasing, to say the truth I haven't tried his method yet and compared it, but I think it's "better").

As a good PBR renderer (or any renderer really) should know what kind of errors it's committing, I implemented an area light integrator and used it to verify a few ideas, including Epic's:

Blue dots: ground truth. Meshed plot: Epic's
It turns out the representative point solution does not do a great job at "preserving the shape" of the underlying BRDF, and it (quite understandably) just "caps" it with a spherical arc. Also note that at more grazing angles the ground truth is quite different and its cap starts to have a slanted angle as well.

Normalization is also interesting, here I actually thought Epic's method would fare worse (as it seemed to be just a heuristic without enough justification behind it), but it's actually quite close to ground truth. Always check...

The smaller, gray mesh is the ground truth, the non-gridded light blue surface is Brian's normalization for the representative point solution, the gridded light blue one is my own version.

It is possible to do better, with varying degrees of complexity. At a certain point things start to be needlessly complicated, so when you start looking at data try not to lose track of what your artists want and what makes a perceivable difference.
Don't make my mistake of staying too long in Mathematica looking for a perfect solution, without verifying in actual shaders that you've passed the point where it matters...

Arguably, for example, for small lights the "roughness modification" solution is actually better (don't you love it when the old OpenGL fixed-pipeline model, which had a specular power modifier in the lights, turns out to be more right than people thought for years?). Brian notes in his writeup that for small lights that is indeed a good method, but you might want to think twice about whether you need big lights, or just lights big enough to "correct" for a roughness factor that otherwise would end up wrong in the materials.
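As a reference for what "roughness modification" means in practice, a sketch below; the widening I quote is the one commonly associated with sphere lights in Brian's notes, but treat the exact constant as an assumption on my part, the point is only that it's a one-liner on top of the existing BRDF:

```python
def widened_roughness(alpha, light_radius, distance):
    """Approximate a sphere area light by widening the GGX roughness of the BRDF.
    alpha: the material's GGX roughness (alpha = roughness^2 in many conventions).
    The exact form/constant is an assumption here; the idea is just that a bigger
    light seen from closer behaves like a rougher specular lobe."""
    return min(alpha + light_radius / (2.0 * max(distance, 1e-6)), 1.0)

# Example: a 10cm light at 1m barely changes a rough material,
# but visibly widens the highlight of a mirror-like one.
print(widened_roughness(0.5, 0.1, 1.0))    # ~0.55
print(widened_roughness(0.01, 0.1, 1.0))   # ~0.06
```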

One of my own analytic area approximations
Some other food for thought:

  • How much does hemispherical shadowing (the area light dipping below the shading normal's hemisphere of visibility) matter?
  • Should more of your sky be represented analytically? Note that it is true that the sun itself has a small arc when seen from the earth, but with the scattering that happens in the sky a larger area could be represented as "area light". The advantages are that it would work better with the BRDF and have more "correct" shadowing... 

01 December, 2013

On Mantle

So it seems nobody so far has made a fool of himself being opinionated on AMD's recent Mantle API announcement. Allow me to fill that spot!


- Mantle could be a -great- idea! 

I think AMD nailed it this time... One of the biggest barriers that all kinds of innovations face is adoption. For gaming hardware that's often due to a feedback loop: you need developers to be on board in order for the technology to be utilized, but they won't care to invest money if the user base is not there, and the user base won't be there if developers don't commit to making products using a given technology.
That's the reason why for example you don't see most of the big players committing to exclusives early-on when a new console generation comes out, and why for example I still believe Oculus will have a tough time regardless of how amazing it is...

One way out of this is to be somewhat scalable, offering a technology that works without your proprietary stuff but works much better with it. E.g. PhysX: the hardware acceleration board for physics in games was initially an extremely bad idea, they sold nothing and they deserved to sell less (I feel for people that wasted their money on that). Nowadays though it works, both because NVidia bought them and made the library work on their much more popular GPUs, lowering the barrier of entry, and because they offer a CPU fallback, so it's not exclusive to NVidia's hardware. Great job, and I'd like them to do something similar with GameWorks as well, by the way...

Now, it seems that AMD is not following that route here; I guess it would be possible to make an emulation layer that falls back from Mantle to DX11 (or well, parts of it), but they aren't.
Consider that next-gen consoles are already on AMD's hardware, and that console game development is incomparably a bigger market than PC gaming (for most gamedevs), so most companies are more than willing to invest in all kinds of proprietary tech if it runs on consoles.

So what AMD really should do (and I think they will) here is to find a way to leverage the position they have on consoles (which some analysts say probably doesn't bring huge quantities of cash) to help their PC line as well. By keeping Mantle close at least to PS4's API (as I expect it to be... or anyhow providing a Mantle layer for PS4...) they are "recycling" an investment developers already have to make and lowering the barrier of entry. Also, developers will already have a DX11-ish renderer ready for XB1 (and I expect Microsoft to similarly keep the next versions of DX in sync with the XBone to leverage a similar effect), thus allowing the "PC build" of the game to just be a merge of two technologies studios already have to employ: a XB1-ish renderer and a PS4-ish renderer.

- Why does DirectX 11 suck (not!)?

What are DirectX big issues? 


Well, mostly they are all related to the fact that it doesn't break the tradition of deferring work to draw time, requiring the driver to do extra work to keep track of state and then pull it together to submit to the GPU...

The hardware state is not (and won't ever be, in an abstraction API) one to one with the DirectX bits of state, so the driver has little choice but to store what DirectX tells it to set in the hardware, wait until a draw appears and then look at -all- the state it stored and translate it into the appropriate bits and pieces that go in the hardware. 
Another way the driver needs to do extra work is that state lifetime is all "wrong" too: certain objects in DirectX can be just updated with new information, but that's not possible for a driver as it has to keep the old ones around until the GPU has finished processing them, so you need to make duplicates, patch things and all kinds of horrors.
The final nail on the coffin is how object updates are seen across threads (deferred context). Unfortunately, the DX11 model allows certain objects (i.e. dynamic buffers) to be seen across threads, and the final order of updates will depend on the order the deferred contexts are invoked from the immediate device, forcing the driver to leave holes in the contexts (or not generating a real GPU command buffer at all!) for later patching. This has to happen also for a few other objects, really.

Does this make of DirectX a bad API? I'd argue, it makes it less good than it could have been, surely. But it's still fairly ok, and I suspect that most of the issues could be sidestepped...

Yes, ideally, state could have been grouped in bigger entities (ideally a big block for all HW state but memory buffers, like textures, vertices, shaders and constants); if a single "set" command sets a lot of state, chances are the driver will more often have all the information needed to do some work. Buffers could have been defaulted to "discard", and do we really need refcounting in this day and age? Persistent state (having a state machine) is also quite bad (OpenGL doesn't fare better in that regard)... Many other little adjustments could have been made; remember though there is also a balance with ease of programmability (less of an issue for gamedevs maybe, but DirectX is not only for games) and compatibility with legacy hardware and so on...

- A frigging hundred thousand drawcalls. And do I care?

Why is Mantle cool for developers? Assuming that it's not a big investment, so it's approachable, why should we care? Well, one of the biggest bullet-points AMD so far has been putting out there is that the API being faster, slimmer and more low-level will allow for much more drawcalls, more unique objects on screen.

I think this is mostly marketing (I might be proved wrong...), and it's unfortunate that AMD didn't yet publish any real numbers of real games doing really better for real. I think it's because for the most part, they won't.
Now, it is true that there are situations where PC games are CPU and driver-bound; if you have an extremely powerful GPU that might quite often be the case, and there are certain games that are really optimized to generate a LOT of draws, e.g. forward-rendering engines that rely on splitting geometry at light and shader "intersections", or engines that rely on multipass rendering... It's nice to have a speedup in these circumstances, but let's be honest, will the industry really jump on the idea, or rather let (AMD) PCs be slightly-less-than-optimal and circumvent that with their extra power (as it always has been)?

To answer that I think you should really look at DirectX 11 and the amount of effort the industry put into that. DirectX 11 came out in 2009, over four years ago now! How many games have shipped on it? How much better were they compared to DirectX 9? Are there games that are still faster on 9 than 11? What about tools? How much did the industry care? How much did we use Compute shaders? Geometry shaders? Tessellation? 
Truth is, you will start seeing better DirectX 11 games just now, because consoles are now on DX11-ish hardware. Before, nobody really cared much, and once your assets are made to work on consoles there is only little you can do to "dress them up" with extra features... Everybody remembers Crysis 2's tessellation issues, but almost all games did the same thing, just minor cosmetic dressing in their DirectX 11 versions, when they cared to put one out. If you have actually worked with DirectX 11 you will have found that a big issue is still driver bugs and various things that don't work as they should...

Really, the API is not perfect. But mostly it's that nobody, not the gamedevs nor the GPU vendors did really invest a lot. I would argue that the performance issues for example are as much a fault of the API as they are a fault of the drivers. It could go faster, even right now, in the end even in the golden era of PC graphics we always worked with the GPU vendors to "carve" certain hot-paths where the drivers were fast, a sort of contract on a language, basically an API inside the API. And I don't believe (I might be wrong) that DX11 doesn't allow for a proper multi-threaded driver (or more than it is right now anyways) with some restrictions on how we use it (a "fast path" that falls-back if you do the wrong calls). I think it's that having such things is quite hard and won't make anybody rich, and thus nobody really invested a lot of money in them...

So where does all this leave us? Well, it turns out that not only are we "good" at writing engines that work within the thousands-of-draws-per-frame limit, but I suspect we still will need to be, because not all the platforms will be able to do hundreds of thousands. And if not all the platforms will, it means that we still will need to think about art assets and graphic techniques in a way that works with thousands of draws. Once you have a world that works like that you're set, you won't really be able to use that ability of drawing hundreds of thousands. 

Nowadays in any game you have, even if you bump the draw distance all the way, you still won't generate that many draws. It would probably be cool to think about a new generation of engines that structurally work with a hundreds-of-thousands-of-draws assumption, I think it could very well change the way we think about culling and instancing, and suggest entirely new approaches. 
But I don't think it will happen, I suspect it will remain a marketing thing, and games will go better just because they can be a bit faster on certain hardware configurations at doing the draws they do.
At best, we'll have more particles or such dressing-style effects, but I can't really imagine an application right now because we're good at doing these things with very few draws already. Anyhow, it will be hard to fully employ any ability that requires thinking about assets in different ways, if said ability doesn't work everywhere...

- Mantle and NVidia? Mantle and EA? Mantle and Steam Machines?

Even if AMD says the opposite, I don't think Mantle will be a cross-platform API. Well, I don't even think it will be a cross-generation API. It could be, because really it's not hard to imagine how a more modern API should work (I bet DX12 will solve most DX11 issues, and OpenGL via certain extensions is already getting there, bindless, multidrawindirect...), but I don't think it even should...
As I wrote at the beginning, it would be the best if Mantle was as close to PS4 as possible, lowering the investment needed for gamedevs. Even if it works only on the GCN hardware and the GCN will be the architecture they use more or less for this entire console generation, that would be plenty. No gaming-oriented graphics API (and probably no graphics API in general) has really to think with longer timeframes, technology changes anyways.

If Mantle is close to PS4 then probably it won't map perfectly to NVidia's hardware, but I don't think it's hard to believe that the API could run, and maybe even run well, on NVidia's GPUs, there are ways to abstract things just well enough. But I don't think NVidia will join the program; developers would surely be happy if that were the case, but, politics... Also, if AMD really wanted to make an "open" API, well, they surely shouldn't have been doing it behind closed doors with EA|Dice and nobody else...

Speaking of which... What's in it for EA? When I first heard about Mantle I was puzzled and in a way I still am. For the reasons I sketched above I don't think that EA is going to directly make more money from it. I don't think it will be as revolutionary as it could sound to begin with, I see it as an optimization, and I would be surprised to see significant graphical features locked to Mantle only. And even if such things existed, we're talking about an influence on less than half of the PC market, which per se is significant only for very few EA games to begin with (only Battlefield?).

I don't really see them selling more copies because of Mantle, to a degree that I even suspected that AMD might have just paid to get Dice on board, or did most of the work themselves for the porting. On the other hand I wouldn't discard the sheer passion of Dice's team for graphics and technology. And I can't really get a feeling of how much Frostbite (and Ignite) as brands per se help marketing EA's games, to a degree where being on top of whatever graphical innovation there is directly strengthens that brand and skews people into thinking that everything Frostbite will be a must-buy...

About Steam Machines, I don't know really. I don't know how Steam Machines will succeed to begin with, even less Mantle on them. They are PCs. They cost in the ballpark of what a PC would (from what we know so far) with similar hardware, just without Windows, where still the vast majority of games are. And I don't think Valve can really publish their games on SteamOS only (HL3...) as certain people say, for the same reasons no huge title lands on new consoles at launch. Too small of a market. 
Even if SteamOS can just be installed alongside Windows on the PC you already might have, it would simply hurt the sales of the game, piss off the Windows Steam users, which are the "core" of Valve's market, and be a crazy move in every way. Yes, Mantle could run on Linux, I guess even easily. But it won't help. You would still need to cover OpenGL for NVidia's hardware anyways, so it doesn't really lower the cost of entry. Unless it gets so crazy popular that some studios will be willing to do Mantle exclusives... Hard to imagine.

- Tl;Dr - Conclusions and expectations

Mantle could be a great move for AMD. I hope it will be very easy to port from PS4; if so, we will see titles using it as a performance improvement. That's the minimum expectation: AMD hardware gets a framerate boost on some or many titles depending on how easy it is to port. Both low-end configurations and very high-end ones (where a single-threaded CPU driver might likely stall the GPU) might get an advantage.

I expect also some savings on the GPU side, not only the CPU, especially if they expose better ways to control the scheduling of draws and compute on the GPU, aiding efficiency, but also from other things like being able to avoid certain operations altogether. Actually I would say there is a lot on the compute side that is not exposed today by DirectX; being able to schedule that better might be a bigger win than the 100k draws per frame, which I think is mostly marketing...

If they expose more of certain GPU details which are not accessible under DirectX/OpenGL, there might even be certain effects that are available only on Mantle. We might see better streaming and texture usage, thus enabling nicer textures, so getting some better looks from there instead of just better framerates. Stuff like 3d rendering (rendering the scene twice) might benefit more from Mantle, as could all kinds of algorithms that need to submit the scene to different buffers (think for example more shadowmapped lights etc...).
Overall though my "best" scenario is that it might enable "minor" cosmetic additions that are hard or too expensive to do without it. The kind of things you saw in DirectX10/11 versions of DirectX9 games. I doubt games will look significantly different, I doubt assets will be made exclusively for it. I doubt NVidia will jump on board.

Mantle is interesting also because it will open the "lower level" layer to researchers and categories which don't usually work on consoles; some good stuff can come out of that, stuff that I can't foresee today and might change the landscape...

But most importantly, it will -surely- tell NVidia and Microsoft to "get their shit together" (having Frostbite on Mantle is enough already to call it a success and make these companies worry, imho). I think Mantle could be cross-platform, I don't think it will be (NVidia won't make it)... which will lead to both a better DirectX (which Microsoft will likely leverage also to make XB1 better) and better drivers... If they feel threatened and they care, put money on that, they might even succeed at making Mantle "obsolete" (less attractive) faster than it will spread... We'll see...

03 November, 2013

You have failed

This appears also on AltDevBlogADay.

Today I was watching Mike Acton's talk at SIEGE 2013 on leadership, and it prompted me to stop the article I was working on to start drafting this. I recommend watching his talk, it's quite good and it talks about the key to leadership: responsibility.

Indeed, leading is about accepting responsibility, it's different from management and its methodologies, and I don't think it really applies only to the people who we identify as "leads". It doesn't come with a tag, really; leadership is a quality that is valuable regardless of your position, and most of the good traits of leadership are the same, only the scope, or sphere of influence, changes with your job. And that is exactly because leadership is about responsibility, which is a universal value, even outside the workplace really. 

It's maybe even a pet peeve of mine, I hate when we (and we all do to a degree) think about circumstances or the faults of others, without first thinking about what we did or what we can do. Now, among all that responsibility entails there is something that people shy away from talking about, something very fundamental that we have to discuss, and the reason I'm writing this. Failure.

If you never failed, you didn't try hard enough really, right? To a degree I think we all agree that failure is important, it is a metric of how much you push yourself out of your comfort zone, but it isn't in any way a positive thing, I won't make some hippy case for the contrary. We all want to be successful, we want our programs to work, our games to sell, our research to innovate and so on. 

Success is good, failure is bad... but on the other hand, we don't just sweep under the rug our failures, right? Failures are problems, problems... well that's something we can work on, it is information, it is learning, it is part of our job, it is part of being responsible.

I find it hard not to be defensive, we instinctively are I think, surely I can be and it requires applying quite some thought and attention to detect these instances in oneself. Even what I just wrote is an example of it, I changed into "I can be" my original "I am" because writing something negative about yourself seems to trigger some internal alarms.

Have you ever experienced a studio head coming in and saying words along the lines of "we didn't do well" and "things didn't work out as we expected" so we have to do some crunch and overtime maybe even throwing in a few hints at how that's kind-of normal in our line of business anyways? Would you not have preferred someone saying I was wrong, I approved these decisions that didn't pan out, now we have to ship and I think this is the best course of action, and of course if you want to talk about alternatives come and we will figure things out?

If that's something that you agree with and have experienced, then be responsible, and apply the same lesson to yourself as well... Wouldn't it be a better world if we knew, for example, all the interesting ways a given technique fails, not only the ways it succeeds? Why can't we be open about failures? Honesty leads to facing issues in a positive way, it leads to trust, it is a remarkable value. Managing failure is part of leadership, educating about it is part of leadership, and certainly the more influence on the studio culture you have (or should have) the more you're responsible for these aspects, but really it starts with everyone, leader or not. It's a good skill to learn.

Let go of defensive instincts, they won't make anything better.

01 November, 2013

Battlefield 4 Review (graphics)

UPDATE: I see this has been picked up by some gaming forums. All fine if we take this not -too- seriously, the disclaimer below applies, these are some limited considerations and thoughts I had by putting a few hours in the game while waiting for other stuff to finish and so on. I stand by the fact that some things are interesting to think about (beware, I might even be technically wrong when I say they do this or could do that, I didn't use any hack to reverse-engineer the game) for people who make games. I see people saying "it's like Digital Foundry on steroids". No, DF spends weeks to do a really amazing job reversing what they can reverse accurately. I spent little time and had some unsubstantiated rendering thoughts, it's at best different. Anyhow, if it doesn't end up in a flamewar that makes me take this down, I might do it again for other games or publish other stuff I did in the past and kept private. Maybe even do it seriously next time.

So... This ain't gonna buy me any friends I guess. On the other hand I have to say I would be thrilled to see people tell me even the harshest things about my work, I've learned from a great artist who once told me to seek out people that would tear my drawings to pieces. Not that there is anything to tear in DICE's excellent game, just to say, please do dissect my work :)

Also, these are just some things that I've noticed in a few hours of the single-player campaign, on my PC at Ultra. It's not comprehensive. It's biased by whatever happens in the first few hours of SP, by my mood the day I played, and in many other ways. It's not backed by Pix captures nor by any special knowledge, so it's probably ALL WRONG; I didn't take enough time to "reverse" anything.
I routinely survey games and their graphics, I consider it part of my job, but often I don't have much time for that. Worse still when the game is good and I end up actually you know, playing it, instead of just looking at rendering tech :)

I hope the screenshots will survive blogspot's compression; also notice that most of them are downsized, so don't pixel-peep, most images are equivalent to a supersampling AA version...

Ok, so. Let's go.

Frostbite is a great engine and I'm actually thrilled to see what all the various EA studios come up with using it, truly can't wait. So as you will imagine and as everybody already will tell you, there is a lot of good stuff... That's why I'll start instead with three things that I think are -wrong-, then move to other observations:

1) J.J.Abrams actually doesn't want his lens flares back
The good: they work well, they are stable (they seem even to fade behind occlusions), they are a mix of techniques, I guess screenspace flares, art-authored particles and framebuffer readbacks to spawn more particles. They look very similar to Crytek's ones, and they truly "blind" you.
The bad: they are fucking everywhere! BF3 did this, Crysis3 did this, please let this not spread to other games! It's a shame because they work well, and there are situations where you are blinded by lights that these could really help shape (even if they are cinematic flares, they don't try to replicate what happens with eyes), but they are always turned all the way up all the time and after a bit you'll want to rip your eyes out. It's a form of torture and a huge artistic sin.


2) Everything has specular. In your face! (a.k.a. Rise and Shine)
Specular reflections seem to be turned always (well, very often) to 11. Now, while there are some situations where the intensity of it probably wouldn't be far off (i.e. really wet environments, pouring rain), we can't do perfect reflections yet.
The good: DICE guys being smart as they are do a number of interesting things, there are cubemap reflections but I think these are augmented with reflected "cards" or simple proxy geometry, I guess the latter only for planar reflections (rendered in a prepass, mirrored) but there's more to it, I think I've seen cards "fade" in and out and sometimes I think I've seen faint artifacts from a screenspace reflection method... Not sure, warrants more investigation
The bad: Specular aliasing everywhere, all forms of it (geometry, normalmaps, planar reflections), and I played on PC at very high-res, MSAA and post-AA filters.
From what I can see, analytic lights suffer mostly because textures, even when looked at up close, have many discontinuities in the specular; I know that certain blending tricks help give you detail, but it seems overdone. Quite surprising as well, as we know by now many ways to circumvent texture/shader aliasing.


For planar reflections, where aliasing is most offensive, honestly it almost seems like blurring the "cards" buffer a bit (maybe I'm wrong and they don't have one...) would solve a lot of issues. Still, the effect should be applied sparingly; really, in CG if you can't do something well, don't do it, sweep it under the rug. Planar reflections are a hack, they work only on some surfaces and this alone is an issue. Plus we can't really occlude sharp specular reflections too well, and occlusion is the key to believable lighting. In some levels, I just wish I had a multiplier I could tune down globally...
Lastly, specular seems always monochromatic. Maybe I'm wrong, and in real life it's often so, but I remember thinking it looked wrong for some materials; could have been art, or maybe a way to save on deferred GBuffer space...


3) Faces
This is the last thing I'll really bitch about. Characters aren't bad, animations are good too, but the shading is off, and again this is quite a surprise. Sometimes faces remind me of L.A.Noire weird low-frequency normalmaps. I wonder if that is indeed because of similar compression of acquired data, DICE has the tech and they used it in previous games... Anyhow, you can still blend that with detail maps driven by skin stretching or so, it's quite "common" tech nowadays. Also, specular. No, this time, the lack of it, which further causes the detail to be quite lacking, if only they had that they would be I think much better, as in general the SSS-ish effects are not bad and tastefully kept "in check" not going into "wax" looks. Looking at the ear edges, it seems a screenspace filter of some sort for SSS, but honestly it could be as well pre-integrated SSS. I don't love the over-bleeding in certain facial expression (normalmap wrinkles), eyes and lips are all "wrong" too. Now, mind you, especially in a deferred renderer doing skin, which is a fairly special material, is hard, but the lack of specular and detail is a mystery.


Texture detail
Now on some of the truly great stuff. On PC, details are amazing, especially textures and particles. Aliasing aside, materials are impressive, even more than Crysis 3 where everything had detail but mostly due to tiled detail textures used everywhere, especially I think to modify specular and give materials a unique microdetail. Here, I couldn't really see tiling, which means either they use very big textures or they do tiled details with some sort of distortion/blending tricks to hide it, or I'm not good at this :) Also material variation on the surfaces is great, you can't really see decals or layers, if they are doing them (which I'm sure they are) everything blends very, very well.


Geometric detail
Geometric detail is mostly due to having a lot of objects :) They don't seem to be doing tessellation, at least for displacement mapping, at all (which is not a bad choice), and I'm not sure if some surfaces do POM or not, honestly I'm not good at spotting that (especially certain techniques that don't simulate reliefs very well can be subtle).
Debris is everywhere, both authored in the level and due to destruction and particles. It's really great, things fly around all the time and it doesn't look unnatural. Also, no shimmer, no aliasing, small particles seem to be pre-blurred when depth of field is on (I think). It's actually easier (even if it might not matter in a deferred, non-baked renderer) to correctly light small instanced objects than large ones.
I couldn't see any particular shading trick applied to the vegetation (but I didn't look very hard, the levels I've played weren't very lush) but one thing it does great is that grass always bends out of the way and it's really hard to "clip" into it.

Destruction is everywhere and it seems mostly precomputed. At least in the campaign, some objects always shatter in the same way, while others shatter progressively, and other events seem scripted, like some cars always explode with a grenade and some others never. All in all, it works great. Also the fact that there is always something that moves, cloth, paper, dust, foliage and so on, really helps to sell the world as "living", it's really a perfect "touch".
Lastly, I couldn't really see LODs crossfading or dissolving, small objects stay around long enough you won't notice they were gone, but that's also expected on PC on Ultra, I should try consoles...


Lighting
Pure deferred has its pros and cons, of course it's hard to bake much when things are always shattering and changing. Overall business as usual, does a good job with many lights on screen, and the analytic BRDF used seems quite "physically based".
They seem to use SSAO sparingly (Nvidia's HBAO I guess at ultra), really just a touch and with quite a huge radius, so you won't see "cartoon shading" silhouettes. It seems almost not randomized at all, so you sometimes get the "stadium lights" effect (which I prefer to the low-freq noise of some randomized AOs); if you look closely though, sometimes there seems to be a 2x2 pattern that survives blurring. Blurring is detectable by the halos you sometimes get. There is a certain trade-off between a large radius of occlusion and artifacts around characters, legs and so on, but most of the times it's not detectable so, good work there.
Honestly it's great that we don't see SSAO-horrors like in Far Cry 3 or, worse, Deus Ex: Human Revolution, but I wish sometimes it was used more, for "arealight" contact shadow kind of effects (bias it towards the sky! don't do radial SSAOs) and to shadow lights that are not dynamically shadowed; I wonder if they encode directional occlusion at all.


As far as scene lighting goes, I couldn't really see any dynamic GI going on (e.g. in the prison scene where there are large floodlights rotating around) and sometimes, especially in interior scenes it kind-of suffers from "deferred flatness", which is also a product I think of not having enough specular occlusion (i.e. on cubemap specular). If you can I'd say, always bake a good occlusion term, possibly directional, offline, or really invest in great directional SSAO or other methods, occlusion is fundamental.
Sometimes, rarely I have to say, things fail more spectacularly than others and you can see a lot of environment/ambient lighting going wrong. This of course is not aided by the fact that often scenes are so shiny...


There appear to be linear (tube) lights, and they seem to have no specular (but might be that it was just an artistic choice), other than that it seems we still have point, spots and directional sun, which is a shame. I think any deferred renderer nowadays has to invest in more "exotic" lights, lights always come with some sort of "shaping" device and in real world you won't easily see a perfect "spot" with a perfect falloff, things are weird, broken, spill, focus and so on. Also, it's really hard to fake ambient lighting with points.
Sometimes there seems to be "scattering", but I think it's mostly due to either placed flares or tuning up the bloom to a very large radius (I might be totally wrong). Both methods work well, but it's not the lovely scatter The Order 1886 is showing us, especially the idea of using bloom means also that sometimes the light is very softly spilled indoors, but overall again, well enough. God rays from the sun are also well done. Sometimes it's possible to go through a door and see the fog on the other side disappear, probably it's due to these settings having "volumes" of tuning, hard to say and to spot. Underwater adds DOF and grain.


Non occluded lights
This I want to remark. If you don't have a source of occlusion for a given light or BRDF piece, prefer not to have that part at all (or keep it subtle). It is amazing how much difference it makes, I already wrote it, I'll write it again: occlusion is fundamental. Specular occlusion is fundamental. At least around silhouette edges, just "cast a ray".


Other stuff
I think it might have "thin wire" AA of sorts on rods and small branches and so on. Not sure if it's there or I just want it to be there because I really think it is a good idea. It seems strange though that a lot of rods don't shimmer much and often become exactly pixel sized. I don't know, I disabled AA, changed resolutions, still not sure. If it's there, it doesn't fade-to-alpha, so what I would do is to increase the diameter of wires to keep them pixel-sized until they're far enough that they can quickly fade into not-existing.
Sky sometimes seems to be "tacked on" and too low-res. In most games sky seems fake. I'm not yet entirely sure why.
Smoke. Sometimes it seems almost to be accumulated/blurred in a separate buffer and then composited on top, I didn't spot any particularly fancy volumetric lighting either. It warrants more investigation. On ultra, I couldn't detect any artifact from subsampling particles, I guess that's not done or if it is, it's done very well (which is hard).


Water. On average great, worse if higher waves/interaction with objects. Sometimes just fucking AMAZING.


DOF blur is smart, I almost never see it "before the focal plane", which is ok, that is harder to do, and as I wrote, better to hide an effect than show artifacts. On ultra it's "sprite DOF" so I guess it uses compute and append buffers to create lists of particles. Which makes it surprising that the artists chose a "catadioptric lens" kind-of bokeh shape, which would be nice in "sampling" kind of DOFs (as you sample around a circle, not inside a disc) but seems unnecessary here. I guess it's there to make a "statement", kind of like the lens flares... We'll grow past these effects and start using them with taste as now we see done with SSAO (more often). On lower settings it goes towards a simple "blur based" DOF which unfortunately bleeds quite a bit :/ Motion blur, pretty standard stuff, doesn't bleed out of silhouettes it seems so nothing particularly fancy.

Final score: great!
Sometimes really unbelievably good. I wish on PC it had a supersampling AA mode, as with proper supersampling some scenes are really amazing. (It does, actually: resolution scale > 100%.)