
27 February, 2015

Why the rendering in The Order 1886 rocks.

Premise: Initially I thought I would post an "analysis" similar to the one I did on Battlefield a while ago, just my personal notes and screenshots taken as I played the game, isolated from any outside source of information (discussions between coworkers and other people, which I bet are happening everywhere right now across the industry). 
In the end I took an even more "high level" approach, so these notes won't really talk about techniques and speculations (or at least, that's not the main point of view).

This is also because I am quite persuaded nowadays that focusing on what matters in an image, being really anal about image quality (correctness, perception), matters more than the specific techniques.

Certainly it matters more than "blindly" implementing a checklist of cool technology: better to do less, even remove features, while being really conscious about what makes an image and why, than to add more and more fancy things without understanding.

Also, in the following I'll often say "you can/can't notice...". I have to specify, because it matters, that I mean to analyze things that you can notice during mostly "normal" gameplay (in a dark room, with a good projector on a sizeable screen), albeit undertaken by a rendering nerd. 
Which is different from the work of actually trying to pixel-peep, reverse-engineer and break everything on purpose (not that it's easy anyhow in this game...). Which is -still- different from the (very hard) work needed to find all mistakes, as many don't register rationally but are still perceptually important.

1) Image stability. Technology that doesn't "show".

Everybody is raving about The Order's antialiasing, and rightfully so. We know it's some sort of 4x MSAA (nowadays that still leaves open a lot of variables on how it's implemented...) and that certainly helps a lot.
But it's not just 4x MSAA, we've seen many titles with MSAA, and you can even crank it on PC titles, yet I'd say nothing before came close to the "offline" rendering feeling The Order exhibits.

The air ship level is particularly impressive. Thin wire AA? Neat use of alpha-test?
And I think that's not just supersampling, but paying attention to pixel quality overall. Supersample what can be supersampled, to the extent it can be, filter out the rest, or move it into noise. It's not just antialiasing: the whole post-effect pipeline works together (if you pixel-peep you can actually notice how the motion blur sometimes actively helps killing a tiny bit of leftover specular shimmering).

Do less (in this case, -show- less pixel details) but better (more stable).

Some of the blurs can hinder gameplay a bit, but the image quality is undeniable.
And while I'm not personally against screen-space effects, in The Order they are noticeable for their absence. I spent some time trying to see if they had SSAO, SS-reflections and so on, and I didn't see anything.
And then you realize, all these techniques, at least as they have been implemented so far, can be spotted, they are not stable, they have telling artifacts.

If a rendering technique "shows", it's already a sign that there's something wrong (e.g. you can say "this image has limited depth of field", but if you can spot that this image uses a separable blur, then that's already a problem).

The Order is remarkable even in that regard, a very technical, trained eye might form some educated guesses, but very vaguely I'd say it's really hard to pinpoint most of the specific techniques.

2) Occlusions. Lighting without leaking.

I always say, better to have missing lights than lights where there shouldn't be any.


Even in photography it's easier to see added light than "removed".
Even if you can't spill due to lack of shadowing...
Behind the scenes from Peter Lindbergh and Gregory Crewdson.
Even the unfortunate concession we sometimes make to gameplay, of adding "character" lights to better separate characters from the background, can be visually quite "disturbing", but there it's by design - it's supposed to show.

Dark Souls 2: not the first, nor the last game to highlight characters.
Don't add an unshadowed light if it's noticeable. And it will -always- be noticeable if it leaks behind surfaces.
While for example a "hair light" on a character in a cinematic can be quite hard to register, a ceiling light shining under a table "disconnects" surfaces and kills realism.

Nowadays occlusion is becoming "easier" with the ability to render more shadowmaps (albeit people still complain about the intricacies of shadowmapping), and with more memory many things can be cached as well.
Static, diffuse indirect global illumination is also not a huge deal (e.g. lightmaps and probes).

But specular will kill you. It's quite a hard problem. Bright highlights shine around object silhouettes, behind walls and so on; exactly because of their intensity they are very tricky to occlude with ambient occlusion and other non-strongly-directional methods.

If you've ever played with screen-space reflections you might have noticed: more than solving the problem of missing reflections, they are useful because they effectively capture the right occlusion (together with reflection) of objects in contact with surfaces (which won't be captured by a cubemap probe, even after parallax correction as a box).

This recent "Paris" scene, done in Unreal, albeit very nice, clearly shows
how specular is hard and screenspace is not enough: see it in motion.
Again, The Order doesn't reveal its hand, whatever it uses, it just works.

The importance of specular leaks, together with the fact that it's better to over-occlude than to leak, is also why I think that if you can't do anything more, at least baking bent normals (even to the point of having only bent normals, without carrying two sets) pays off.
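To make the bent-normal point concrete, here is a minimal sketch (Python standing in for shader pseudocode; the function name and the smoothstep falloff are my own choices, not anything The Order is confirmed to do) of using a baked bent normal plus a visibility-cone aperture to occlude specular:

```python
import numpy as np

def specular_occlusion(bent_normal, reflection_dir, cone_half_angle):
    # Crude cone test: keep the specular lobe only when the reflection
    # vector falls inside the visible cone described by the baked bent
    # normal and its aperture; fade with a smoothstep around the cone
    # boundary instead of a hard cut.
    c = float(np.dot(bent_normal, reflection_dir))
    edge = np.cos(cone_half_angle)
    t = np.clip((c - edge) / max(1e-4, 1.0 - edge), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep
```

A reflection aligned with the bent normal keeps its full highlight; one pointing into occluded directions is killed, which is exactly the silhouette and behind-the-wall leak prevention argued for above.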

3) Atmospherics. Shading the space in-between surfaces.

We always had some atmospherics. Fog, "ground" fog, "god" rays... 

What The Order does there is quite interesting though. London is foggy, and the air is not some kind of afterthought special effect. It's a protagonist in the shading of the image.

It's a recurring theme by now, but again, you can't really "see" it as a special effect or a specific technique. It's a meaningful contribution to the image that is much more subtle and harder to capture.


The surprising thing to me is that I couldn't really notice dynamic occlusion effects in the fog (volumetric shadows or god rays). What impresses is not such effects, but the fact that -every- light scatters: everything subtly changes the color of the fog.

And again the lack of artifacts. You can't see voxels, you can't see leaks, you can't notice particles rotating and fading in and out. Marvellous.

4) Materials. Attention to detail.

To each technique its own abuses. First was bloom. Then lens flares. Then depth of field and so on...
There's always a new "must have" feature that gets cranked to eleven to be "cool" and hip until everyone gets disgusted and dials back, just to latch to something else. 

Stop.
For PBR it seems metals can be problematic. More than one game nowadays pushes shiny, perfectly polished stuff everywhere; I can't really understand why.

Dragon Age Origins: a wonderful game done with a wonderful engine.
But it really has some issues with shiny metals, the Orlais palace is one of the worst offenders.
The Order instead never loses its composure. Materials are never constant, never flat; they are varied and realistic, and realistically blend into each other. Even small details, like lit lightbulbs and refraction, or the sheen of the ink on printed paper, are great.
Texture resolution is constant, you never notice issues between textures of nearby objects with different densities.

Physically based rendering, done mindfully.

Bonus round: baking.

Many games, especially around the period of time when deferred got all the hype, lost many of these points (made them worse) by sacrificing baked solutions to real-time computations, often in a misguided attempt of solving authoring problems (which should be the domain of better -tools-, not runtime changes).

Real-time solutions are still useful when gameplay requires dynamic scenes and updates (and we can't stream these!), but if you need realtime feedback on GI, write a realtime path tracer for your lightmaps...

Bonus round: F+.

The Order 1886 is famously based on a forward+ lighting engine. I actually wonder how different it would have been on an old-school "splitting" forward renderer (static light assignment to geometry) with generous amounts of baking of "secondary" lights. Don't know.

Bonus round: Next-gen and PC hardware.

GPU power won't matter until we can specifically target it.
The Order shows that, and that's why a PS4 title can look better than other games that were still targeting the previous console generation and then were upgraded with a few extra techniques to push high-end PC GPUs.

Pushing certain marginal stuff to eleven doesn't make as much of a quality difference as creating assets and effects specifically for more powerful GPUs, and that's why, even if PCs were already more powerful than these consoles at launch, we have to wait for console games to improve in order to see real advances in games graphics.

24 January, 2015

Notes on G-Buffer normal encodings

Tonight on Masterchef, we'll try to cook a G-Buffer encoding that is:
  • Fast when implemented in a shader
  • As compact as possible
  • Makes sense under linear interpolation (hardware "blendable", for pixel-shader based decals)
    • So we assume no pixel-sync custom blending. On fixed hardware like the consoles' GPUs it would probably be possible to sort decals so they don't often overlap in screen-space, and add waits so that read-write of the same buffer never generates data races, but it's not easy.
  • As stable as possible, and secondarily as precise as possible.
Normal encodings are a well studied topic, both in computer science and due to their relation with sphere unwrapping and cartographic projections of the globe. Each of the aforementioned objectives, taken singularly, is quite simple to achieve.

Nothing is faster than keeping normals in their natural representations, as a three component vector, and this representation is also the one that makes most sense under linear interpolation (renormalizing afterwards of course). 
Unfortunately storing three components means we're wasting a lot of bits, as most bit combinations will yield something that is not a normal vector, so most encodings are unused. In fact we're using just a thin surface of a sphere of valid encodings inside a cubic space of minus one to one components (in 32bit floating point we'll get about 51bits worth of normals out of the 3x32=96bits storage).

If we wanted to go as compact as possible, Crytek's "best fit" normals are one of the best possible representations, together with the octahedral projection (which has a faster encoding and almost the same decoding cost).
Best fit normals aren't the absolute optimum, as they choose the closest representation among the directions contained in the encoding cube space, so they are still constrained to a fixed set of directions, but they are quite great and easy to implement in a shader.
Unfortunately these schemes aren't "blendable" at all. Octahedral normals have discontinuities on half of the normal hemisphere, which won't allow blending even if we considered the encoding square to wrap around in a toroidal topology. Best fit normals are all of different lengths, even for very close directions (that's the key to not wasting encode space), so really there is no continuity at all.
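For reference, a minimal Python sketch of the standard octahedral mapping (the shader version is analogous); note how the lower-hemisphere "fold" is exactly where the discontinuities that break blending live:

```python
import numpy as np

def oct_encode(n):
    # Project the unit vector onto the octahedron |x|+|y|+|z| = 1,
    # then onto the z = 0 plane.
    n = n / np.abs(n).sum()
    if n[2] < 0.0:
        # Fold the lower hemisphere over the diagonals: this is where
        # the discontinuities that prevent blending come from.
        x, y = n[0], n[1]
        n = np.array([(1.0 - abs(y)) * np.sign(x),
                      (1.0 - abs(x)) * np.sign(y),
                      n[2]])
    return n[:2]  # two components in [-1, 1]

def oct_decode(e):
    x, y = e[0], e[1]
    z = 1.0 - abs(x) - abs(y)
    if z < 0.0:  # undo the fold
        x, y = (1.0 - abs(e[1])) * np.sign(e[0]), (1.0 - abs(e[0])) * np.sign(e[1])
    v = np.array([x, y, z])
    return v / np.linalg.norm(v)
```

Encode is just a division and a conditional fold, which is why it's cheaper than searching best-fit lengths while losing almost nothing in decode cost.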

Finally, stability. That usually comes from world-space normal encodings, on the account that most games have a varying view more often than they have moving objects with a static view. 
World-space two-component encodings can't be hardware "blendable" though, as they will always have a discontinuity on the normal sphere in order to unwrap it.
Object-space and tangent-space encodings are hard because we'd have to store these spaces in the g-buffer, which ends up taking encode space.

View-space can allow blending of two-component encodings by "hiding" the discontinuity in the back-faces of objects, so we don't "see" it. In practice though, with less than 12 bits per component you end up seeing wobbling on very smooth specular objects: not only do normals "snap" into their next representable position as we strafe the camera, but there is also a "fighting" between the precise view-to-world transform we apply during decoding and the quantized normal representation. As we don't have 12-bit frame-buffer formats, we'd need to use two 16-bit components, which is not very compact.

So you get quite a puzzle to solve. In the following I'll sketch two possible recipes to try to alleviate these problems.

- Improving view-space projection.

We don't have a twelve bits format. But we do have a 10-10-10-2 format and a 11-11-10 one. The eleven bits floats don't help us much because of their uneven precision distribution (and with no sign bit we can't even center the most precision on the middle of the encoding space), but maybe we can devise some tricks to make ten bits "look like" twelve.

In screen-space, if we were ok with never having perfectly smooth mirrors, we could add some noise to gain one bit (dithering). Another idea could be to "blur" normals while decoding, looking at their neighbours, but it's slow and would require some logic to avoid smoothing discontinuities and normal-map details, so I didn't even start with that.
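A toy 1D sketch of the dithering idea, assuming a plain uniform quantizer (a real implementation would likely use a better noise pattern than white noise):

```python
import numpy as np

def quantize(x, bits, dither=False, rng=None):
    # x in [0, 1]; uniform quantization to 2**bits levels.
    levels = (1 << bits) - 1
    if dither:
        # Add +-half a quantization step of uniform noise before
        # rounding: banding turns into (less objectionable) noise,
        # and the result becomes unbiased on average.
        rng = rng or np.random.default_rng(0)
        x = x + (rng.random(np.shape(x)) - 0.5) / levels
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels
```

The dithered signal has up to twice the per-sample error, but no steps; averaged over neighbouring pixels it tracks the original, which is the "make ten bits look like twelve" trade.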

A better idea is to look at all the projections and see how they distribute their precision. Most studies on normal encodings and sphere unwrapping aim to optimize precision over the entire sphere, but here we really don't want that. In fact, we're using view-space exactly to "hide" some of the sphere, where we'll place the projection discontinuity.

It's common knowledge that we need more than a hemisphere worth of normals for view-space encodings, but how much exactly, and why?
Some of it is due to normal-mapping, which can push normals to face backwards, but that's at least questionable, as these normal-map details should have been hidden by occlusion, were they actual geometric displacement.
Most of the reason we need to consider more than a hemisphere is the perspective projection we apply after the view-space transform, which makes the view vector non-constant in screen-space: around the silhouettes of objects at the edges of the screen we can see normals that point backwards in view-space.

In order to fight this we can build a view matrix per pixel, using the view vector as the z-axis of the space, and then encode only a hemisphere (or so) of normals around it. This works, and it really helps eliminating the "wobble" due to the fighting precision of the encode and the view-space matrix, but it's not particularly fast.
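A sketch of the idea in Python (numpy standing in for shader code; all names are mine): build an orthonormal frame around the per-pixel view vector and store only the two tangent components of the normal:

```python
import numpy as np

def view_basis(view):
    # Orthonormal frame with the (normalized) per-pixel view vector as
    # its z axis; the tangent axes just need to be derived the same way
    # at encode and decode time.
    z = view / np.linalg.norm(view)
    up = np.array([0.0, 1.0, 0.0]) if abs(z[1]) < 0.999 else np.array([1.0, 0.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows are the basis vectors

def encode_hemi(normal, view):
    # Rotate the normal into the per-pixel frame; its z component is
    # now positive for visible surfaces, so only (x, y) needs storing.
    return view_basis(view) @ normal

def decode_hemi(xy, view):
    z = np.sqrt(max(0.0, 1.0 - xy[0] ** 2 - xy[1] ** 2))
    return view_basis(view).T @ np.array([xy[0], xy[1], z])
```

The cost is building (or at least partially rebuilding) that frame per pixel, which is what makes the exact version slow.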

- Encode-space tricks.

Working with projections is fun, and I won't stop you from spending days thinking about them and how to make spherical projections work only on parts of the sphere or how to extend hemispherical projections to get a bit more "gutter" space, how to spend more bits on the front-faces and so on. Likely you'll re-derive Lambert's azimuthal equal-area projection a couple of times in the process, by mistake.

What I've found interesting though, after all these experiments, is that you can make your life easier by looking at the encode space (the unit square you're projecting the normals to) instead, starting with a simple projection (equal-area is a good choice):
  1. You can simply clip away some normals by "zooming" in the region of the encode space you'll need. 
  2. You can approximate the per-pixel view-space by shifting the encode space so per-pixel the view-vector is its center.
    • Furthermore, as your view vector doesn't vary that much, and as your projection doesn't distort these vectors that much, you can approximate the shift by a linear transform of the screen-space 2d coordinate...
  3. Most projections map normals to a disk, not the entire square (the only two I've found that don't suffer from that are the hemispherical octahedral and the polar space transform/cylindrical projection). You can then map that disk to a square to gain a bit of precision (there are many ways to do so).
  4. You can take any projection and change its precision distribution if needed by distorting the encode space! Again, many ways to do so, I'd recommend using a piecewise quadratic if you go that route as you'll need to invert whatever function you apply. You can either distort the square by applying a function to each axis, or do the same shifting normals in the disc (easy with the polar transform).
    • Note that we can do this not only for normal projections, but also for example to improve the sampling of directions in a cubemap, e.g. for reflections...
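As an illustration of the starting projection and of point 3, here is a Python sketch of the Lambert azimuthal equal-area encode/decode plus a simple radial disk-to-square stretch (just one of the many possible disk-to-square mappings):

```python
import numpy as np

def eq_area_encode(n):
    # Lambert azimuthal equal-area: sphere -> unit disk, with a single
    # point discontinuity at n = (0, 0, -1), which view-space can hide.
    return np.asarray(n[:2]) / np.sqrt(2.0 * (1.0 + n[2]))

def eq_area_decode(e):
    f = 2.0 * np.asarray(e)
    d = f[0] ** 2 + f[1] ** 2
    g = np.sqrt(1.0 - d / 4.0)
    return np.array([f[0] * g, f[1] * g, 1.0 - d / 2.0])

def disk_to_square(p):
    # Radially stretch the disk onto the full square, so the corners
    # of the encode square stop being wasted bits.
    p = np.asarray(p)
    m = max(abs(p[0]), abs(p[1]))
    return p if m == 0.0 else p * (np.hypot(p[0], p[1]) / m)

def square_to_disk(q):
    q = np.asarray(q)
    r = np.hypot(q[0], q[1])
    return q if r == 0.0 else q * (max(abs(q[0]), abs(q[1])) / r)
```

The same encode-space trick extends to the "zoom" and "recentering" ideas above: they are all just invertible 2D transforms applied after the projection.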
Lambert Equal Area

Same encode, with screen-space based "recentering"
Note how the encoding on the flat plane isn't constant anymore

- Encoding using multiple bases.

This recipe comes from chef Alex Fry, from Frostbite's kitchen. It was hinted at by Sebastien Lagarde in his Siggraph presentation but never detailed. This is how I understood it, but really Alex should (and told me he will) provide a detailed explanation (which I'll link here eventually).

When I saw the issues stemming from view-space encodings one of my thoughts was to go world-space. But then you reason a second and realize you can't, because of the inevitable discontinuities. 
For a moment I considered a projection that double-covers rotations... But then you'd need to store a bit to identify which of the two projections you are using (kinda like dual-paraboloid mapping, for example) and that won't be "blendable" either.
We could do some magic if we had wrapping in the blending but we don't, and the blending units aren't going to change anytime soon I believe. So I tossed this away.

And I was wrong. Alex's intuition is that you can indeed use multiple projections, and you'll need to store extra bits to encode which projection you used... But these bits can be read-only during decal blending!
We can just read which projection was used, project the decal normals into the same space, and live happily ever after! The key, of course, is to always choose a space that hides your projection discontinuities.


2-bits tangent spaces

3-bits tangent spaces

4-bits tangent spaces
There are many ways to implement this, but to me one very reasonable choice would be to consider the geometric normal when laying down the g-buffer, and choose from a small set of transforms the one with its main axis (the axis we'll orient our normal hemisphere towards during encoding) closest to the surface normal.

That choice can then be encoded in a few bits that we can stuff anywhere. The two "spare" bits of the 10-10-10-2 format would work, but we can also "steal" some bits from any part of the g-buffer, even maintaining the ability to blend that component: use its most significant bits, read them in (as we need to for the normal encoding anyways), and output that component's remaining bits (the ones that need to be blended) while keeping the MSBs the same across all decals.

I'll probably share some code later, but a good choice would be to use bases having one axis oriented as the coordinates of the vertices of a signed unit cube (i.e. spanning minus one to plus one), for which the remaining two tangent axes are easy to derive (no need to re-orthonormalize based on an up vector as is usually done).
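This is a Python sketch of how I'd build such bases (the tangent construction is one arbitrary choice among many; Alex's actual implementation may well differ):

```python
import itertools
import numpy as np

# Eight candidate spaces (3 bits of selection): main axes at the
# vertices of the signed unit cube, normalized.
AXES = np.array(list(itertools.product((-1.0, 1.0), repeat=3))) / np.sqrt(3.0)

def select_space(geom_normal):
    # Index of the basis whose main axis is closest to the geometric
    # normal; this index goes into the spare g-buffer bits.
    return int(np.argmax(AXES @ geom_normal))

def basis(index):
    z = AXES[index]
    sx, sy = np.sign(z[0]), np.sign(z[1])
    # Tangent orthogonal to z by construction: every component of z is
    # +-1/sqrt(3), so there are no degenerate cases and no up-vector
    # re-orthonormalization is needed.
    x = np.array([sy, -sx, 0.0]) / np.sqrt(2.0)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows: tangent, bitangent, main axis
```

Any unit normal rotated into its selected space has a z component of at least 1/sqrt(3) (the worst case is a cube face center, about 54.7 degrees from the nearest vertex axis), so the per-space projection only has to cover that cone plus whatever slack decal blending needs.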

Finally, note how this encode gives us a representation that plays nicely with hardware blending, but is also more precise than a two-component 10-10 projection, if done right. Each encoding space will in fact take care of a subset of the normals on the sphere, namely all the normals closest to its main axis.
Thus, we know that the subsequent projection to a two-component representation has to cover only a subset of all possible normals (a hemisphere worth of them, for decals to be able to blend, plus the solid angle of the largest "cell" spanned by the tangent spaces on the sphere), and the more spaces (and bits) we have, the fewer normals we'll have to encode in each (e.g. with only one extra bit, the tangent spaces would divide the sphere in two halves, and in each we'd need to encode a whole sphere worth of normals to allow decals to blend, which would be a bad idea).
So, if we engineer the projection to encode only the normals it has to, we can effectively gain precision as we add bits to the space selection.

Conversely, beware of encoding more normals than needed, as that can create creases as you move from one space to another on the sphere when applying normal maps: certain points will have more "slack" space to encode normals and others less, and you'll have to make sure you don't end up pushing normals outside the tangent-space hemisphere at any surface point.

- Conclusions & Further references:

Once again we have a pretty fundamental problem in computer graphics that still is a potential source of so much research, and that still is not really solved.
If we look carefully at all our basic primitives, normals, colours, geometry, textures and so on, there is still a lot we don't know or we do badly, often without realizing the problems.
The two encoding strategies I wrote of are certainly a step forward, but even so, with 10 bits per component you will still see quantization patterns on very smooth materials (sharp highlights and mirror-like reflections are the real test case for this). And none of the mappings is optimal; there are still opportunities to stretch and tweak the encoding to use its bits better.
Fun times ahead.


A good test case with highly specular materials, flat surfaces, slowly rotating objects/view

06 September, 2014

Scientific Python 101

As for the Mathematica 101, after the (long) introduction I'll be talking with code...

Introduction to "Scientific Python"

In this I'll assume a basic knowledge of Python, if you need to get up to speed, learnXinYminute is the best resource for a programmer.

With "Scientific Python" I refer to an ecosystem of python packages built around NumPy/SciPy/IPython. I recommend installing a scientific python distribution, I think Anaconda is by far the best (PythonXY is an alternative), you could grab the packages from pypi/pip from any Python distribution, but it's more of a hassle.

NumPy is the building block for most other packages. It provides a matlab-like n-dimensional array class that provides fast computation via Blas/Lapack. It can be compiled with a variety of Blas implementations (Intel's MKL, Atlas, Netlib's, OpenBlas...), a perk of using a good distribution is that it usually comes with the fastest option for your system (which usually is multithreaded MKL). SciPy adds more numerical analysis routines on top of the basic operations provided by NumPy.
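A tiny taste of what "matlab-like" means in practice: whole-array expressions and linear algebra that run in compiled Blas/Lapack code instead of the Python interpreter (a toy example, obviously):

```python
import numpy as np

# A linear solve and an elementwise expression: both execute in
# compiled (Lapack / vectorized C) code, not in Python loops.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(a, b)          # Lapack under the hood

t = np.linspace(0.0, 1.0, 1000)
y = np.sin(2.0 * np.pi * t) ** 2   # one vectorized expression, no loop
```

The moment you stay inside expressions like these, NumPy is fast; the moment you fall back to per-element Python loops, you pay the interpreter tax discussed later.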

IPython (Jupyter) is a notebook-like interface similar to Mathematica's (really, it's a client-server infrastructure with different clients, but the only one that really matters is the HTML-based notebook one). 
An alternative environment is Spyder, which is more akin to Matlab's or Mathematica Workbench (a classic IDE) and also embeds IPython consoles for immediate code execution.

Especially when learning, it's probably best to start with IPython Notebooks.

Why I looked into SciPy

While I really like Mathematica for exploratory programming and scientific computation, there are a few reasons that compelled me to look for an alternative (other than Wolfram being an ass that I hate having to feed).

First of all, Mathematica is commercial -and- expensive (same as Matlab btw). Which really doesn't matter when I use it as a tool to explore ideas and make results that will be used somewhere else, but it's really bad as a programming language.

I wouldn't really want to redistribute the code I write in it, and even deploying "executables" is not free. Not to mention not many people know Mathematica to begin with.
Python, in comparison, is very well known, free, and integrated pretty much everywhere. I can drop my code directly in Maya (or any other package really, python is everywhere) for artists to use, for example.

Another big advantage is that Python is familiar, even for people that don't know it, it's a simple imperative scripting language.
Mathematica is in contrast a very odd Lisp, which will look strange at first even to people who know other Lisps. Also, it's mostly about symbolic computation, and the way it evaluates can be quite mysterious. CPython internals, on the other hand, can be quite easily understood.

Lastly, a potential problem lies in the fact that python packages aren't guaranteed to have all the same licensing terms, and you might need many of them. Verifying that everything you end up installing can be used for commercial purposes is a bit of a hassle...

How does it fare?

It's free. It's integrated everywhere. It's familiar. It has lots of libraries. It works. It -can- be used as a Mathematica or Matlab replacement, while being free, so every time you need to redistribute your work (research!) it should be considered.

But it has still (many) weaknesses.

As a tool for exploratory programming, Mathematica is miles ahead. Its documentation is great, it comes with a wealth of great tools, and its visualization options are probably the best, bar none.
Experimentation is an order of magnitude better if you have good visualization and interactivity support, and Mathematica, right now, kills the competition on that front. 
Manipulate[] is extremely simple, plotting is decently fast and the quality is quite high, there is lots of thought behind how the plots work, picking reasonable defaults, being numerically reliable and so on.

In Python on the other hand you get IPython and matplotlib. Ok, you got a ton of other libraries too, but matplotlib is popular and the basis of many others too. 
IPython doesn't display the output of assignments, and shows only the last evaluated expression. Matplotlib is really slow, really ugly, and uses a ton of memory. Also, you can either get it integrated in IPython, with zero interactivity, or in a separate window, with just very bare-bones support for plot rotation/translation/scaling.

There are other tools you can use, but most are 2D only, some are very fast and 3D but more cumbersome to use and so on and so forth...
Update: nowadays there are a few more libraries using WebGL, which are both fast and allow interactivity in IPython!

As a CAS I also expect Mathematica to be the best, you can do CAS in Python via SymPy/Sage/Mathics but I don't rely too much on that, personally, so I'm not in a position to evaluate.

Overall, I'll still be using Mathematica for many tasks, it's a great tool.

As a tool for numerical computation it fares better. Its main rival would be Matlab, whose strength really lies in the great toolboxes Mathworks provides. 
Even if the SciPy ecosystem is large with a good community, there are many areas where its packages are lacking, not well supported or immature.

Sadly though, for the most part Matlab is popular not because of the unique functionality it provides, but because MathWorks markets well to academia, and it became the language of choice for many researchers and courses.
Also, researchers don't care about redistributing source nearly as much as they really should; this day and age, it's all still about printed publications...

So, is Matlab dead? Not even close, and to be honest, there are many issues Python has to solve. Overall though, things are shifting already, and I really can't see a bright future for Matlab or its clones, as fundamentally Python is a much better language, and for research being open is probably the most important feature. We'll see.

A note on performance and exploration

For some reason, most of the languages for scientific exploratory programming are really slow. Python, Matlab, Mathematica, they are all fairly slow languages. 

The usual argument is that it doesn't matter at all, because these are scripting languages used to glue very high-performance numerical routines. And I would totally agree. If it didn't matter.
A language for exploratory programming has to be expressive and high-level, but also fast enough for the abstractions not to fall on their knees. Sadly, Python isn't.

Even with simple code, if you're processing a modicum amount of data, you'll need to know its internals, and the variety of options available for optimization. It's similar in this regard to Mathematica, where using functions like Compile often requires planning the code up-front to fit in the restrictions of such optimizers.

Empirically though, it seems the amount of time I have to spend minding performance patterns in Python is even higher than in Mathematica. I suspect it's because many packages are pure Python.

It's true that you can do all the optimization staying inside the interactive environment, not needing to change languages. That's not bad. But if you start having to spend a significant amount of time thinking about performance, instead of transforming data, it's a problem.

Also, it's a mystery to me why most scientific languages are not built for multithreading, at all. All of them, Python, Matlab and Mathematica, execute only some underlying C code in parallel (e.g. blas routines). But not anything else (all the routines not written in native code, often things such as plots, optimizers, integrators).

Even Julia, which was built specifically for performance, doesn't really do multithreading so far, just "green" threads (one at a time, like python) and multiprocessing.

Multiprocessing in Python is great: IPython makes it a breeze to configure a cluster of machines or even many processes on a local machine. But it still requires orders of magnitude more effort than threading, and it kills interactivity (global objects, imports and functions all have to be pushed manually across instances).

Mathematica at least does the multiprocessing data distribution automatically, detecting dependencies and input data that need to be transferred.

Learn by example: 



Other resources:

Tutorials
Packages
  • Scipy: numpy, scipy, matplotlib, sympy, pandas
  • Optimization and learning
  • Dill, a package that can serialize/snapshot a Python kernel. Useful when one wants to stop working on an IPython session but be able to pick it up again from the same state next time.
  • Performance
    • A comparison of Cython, Numba, PyCuda, PyOpenCl, NumPy and other frameworks on a simple problem (Mandelbrot set)
    • SciPy Weave, inlines C code in Python code, compiles and links to python on demand. Deprecated. Try Cython instead.
    • Numba, a numpy "aware" compiler, targets LLVM, compiles in runtime (annotated functions)
    • Cython, compiles annotated Python to C. Bottleneck uses it to accelerate some NumPy functions. (see also Shedskin, Pythran and ocl)
    • JobLib, makes multiprocessing easier (see IPython.Parallel too) but still not great as you can't have multithreading, multiprocessing means you'll have to send data around independent python interpreters :/
    • NumExpr, a fast interpreter of numerical expressions on arrays. Faster than NumPy by aggregating operations (instead of doing one at a time)
    • WeldNumpy is another faster interpreter, the idea here is to lazy-evaluate expressions to be able to execute them more optimally.
    • Theano, targets cpu and gpu, numpy aware, automatic differentiation. Clumsy...
    • Nuitka, offline compiles Python to C++, should support all extensions
    • PyPy, a JIT, with a tracing interpreter written in python. Doesn't support all extensions (the CPython C library interface)
    • Python/Cuda links
  • Non-homogeneous data
    • Blaze, like numpy but for non-homogeneous, incomplete data
    • PyTables, hierarchical data
  • Graphics/Plotting
    • For 3d animations, VisVis seems the only tool that is capable of achieving decent speed, quality, and has a good interface and support. It has a matlab-like interface, but actually creating objects (Line() instead of plot...) is much better/faster.
      • Update: Its successor is VisPy, at the time I first wrote this, it was still experimental. I have not tried it yet, but it seems better now.
      • Update: Ipyvolume seems viable too. 
    • Bokeh, nice plotting library, 2d only, outputs HTML5/JS so it can be interacted with in IPython Notebook. Somewhat lower-level than Matplotlib, although it does provide a bunch of plotting functions
      • Chaco is another 2d plot/gui library, very OO, similar to Bokeh it might require more code to create a graph
    • Matplotlib toolkits (MPL is SLOW and rather ugly, but it's the most supported):
      • Mplot3d, quite crappy 3d plots
      • Seaborn, good looking 2d plots
      • mpld3, a matplotlib compatible library that emits HTML5/JS using d3.js
      • NodeBox OpenGL is nifty, and DrawBot is very similar too (but OSX only at the moment). They actually derive from the same base sourcecode.
      • Point Cloud Library and PyGTS, Gnu Triangulated Surface Library
      • Others:
    # For the Anaconda Windows distribution, to use Mayavi you need to install
    # mayavi and wxpython; from the command line: binstar search -t conda mayavi
    %gui wx
    %pylab wx
    %matplotlib inline
    # In the IPython Notebook, %matplotlib has to come after %pylab, it seems.
    # "inline" is cool, but "qt" and "wx" allow interactivity.
    # qt is faster than wx, but Mayavi requires wx.

        • PyQwt, 2d/3d library, faster than matplotlib but even uglier. PyQtGraph is another similar project. Good if interactivity is needed; it also provides GUI components to code interactive graphs.
        • DisLin, 2d/3d graphs. Does not seem to support animations
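    To make the NumExpr entry above concrete, here's a pure-Python sketch of "aggregated" versus "one operation at a time" evaluation (NumExpr does the fusing with a compiled virtual machine over array chunks; the lists here just stand in for arrays):

    ```python
    a = [1.0, 2.0, 3.0, 4.0]
    b = [10.0, 20.0, 30.0, 40.0]

    # "One op at a time" (numpy-style): each operation is a full pass over
    # the data and materializes a temporary array.
    tmp = [2.0 * x for x in a]                   # temporary traffic
    unfused = [t + y for t, y in zip(tmp, b)]

    # "Aggregated" (NumExpr-style): the whole expression 2*a + b runs in a
    # single pass, with no intermediate array.
    fused = [2.0 * x + y for x, y in zip(a, b)]

    assert unfused == fused  # same result, roughly half the memory traffic
    ```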
    Other

    03 September, 2014

    Notes on real-time renderers

    This accounts only for well-established methods, there are many other methods and combinations of methods I'm not covering. It's a sort of "recap".

    Forward
    Single pass over geometry generates the "final" image; lights are bound to draw calls (via uniforms), and accurate culling of light influence on geometry requires CSG splits. Multiple lights require either loops/branches in the shaders or shader permutations.
    • Benefits
      • Fastest in its baseline case (single light per pixel, "simple" shaders or even baked lighting). Doesn't have a "constant" up-front investment, you pay as you go (more lights, more textures...).
      • Least memory necessary (least bandwidth, at least in theory). Makes MSAA possible.
      • Easy to integrate with shadowmaps (can render them one-at-a-time, or almost)
      • No extra pass over geometry
      • Any material, except ones that require screen-space passes like Jimenez's SS-SSS
    • Issues
      • Culling lights on geometry requires geometrical splits (not a huge deal, actually). Can support "static" variations of shaders to customize for a given rendering case (number/type of lights, number of textures and so on) but "pays" such optimization with combinatorial explosion of shader cases and many more draw calls.
      • Culling dynamic lights can't be done efficiently. Scripted lights along fixed paths can be somewhat culled via geometry cutting, but fully dynamic lights can't efficiently cut geometry at runtime; they can only be assigned to objects, thus wasting computation.
      • Decals need to be multipass, lit twice. Alternatively, for static decals the mesh can be cut and texture layering used (more shader variations), or for dynamic decals color can be splatted before the main pass (but that costs an access to the offscreen buffer regardless of whether a decal is there).
      • Complex shaders might not run optimally. As you have to do texturing and lighting (and shadowing) in the same pass, shaders can require a lot of registers and yield limited occupancy. Accessing many textures in sequence might create more cache thrashing than accessing them in separate passes.
      • Lighting/texturing variations have to be dealt with dynamic branches which are often problematic for the shader compiler (must allocate registers for the worst case...), conditional moves (wasted work and registers) or shader permutations (combinatorial explosion)
      • Many "modern" rendering effects require a depth/normal pre-pass anyway (i.e. SSAO, screen-space shadows, reflections and so on). Even though some of these can be faded out after a preset range, so they can work with a partial pre-pass.
      • All shading is done on geometry, which means we pay all the eventual inefficiencies (e.g. partial quads, overdraw) on all shaders.
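    As a toy illustration of the light-to-draw-call binding described above (all names and values are made up, and Python stands in for shader code), a forward renderer conceptually shades each object with only the lights bound to it:

    ```python
    # Illustrative values only -- not from any real engine.
    lights = {"key": 0.75, "fill": 0.25, "rim": 0.5}

    scene = [
        {"name": "hero",  "albedo": 1.0, "lights": ["key", "rim"]},
        {"name": "floor", "albedo": 0.5, "lights": ["key", "fill"]},
    ]

    def shade_forward(obj):
        # One pass: texturing ("albedo") and lighting happen together,
        # looping only over the lights bound to this draw call. Supporting
        # more light types means branches or shader permutations right here.
        return obj["albedo"] * sum(lights[name] for name in obj["lights"])

    image = {obj["name"]: shade_forward(obj) for obj in scene}
    ```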

    Forward+ (light indexed)
    Forward+ is basically identical to forward but doesn't do any geometry split on the scene as a pre-pass; it relies on tiles or 3d grids ("clustered") to cull lights at runtime.
    • Benefits
      • Same memory as forward, more bandwidth. Enables MSAA.
      • Any material (same as forward)
      • Compared to forward, no mesh splitting necessary, much less shader permutations, less draw calls.
      • Compared to forward it handles dynamic lights with good culling.
    • Issues
      • Light occlusion culling requires a full depth pre-pass, for a total of two geometrical passes. Can be somewhat sidestepped with a clustered light grid, if you don't end up splatting too many lights into it.
      • All shadowmaps need to be generated upfront (more memory) or splatted in screen-space in a pre-pass.
      • All lighting permutations need to be addressed as dynamic branches in the shader. Not good if we need to support many kinds of light/shadow types. In cases where simple lighting is needed, still has to pay the price of a monolithic ubershader that has to consider any lighting scenario.
      • Compared to forward, seems a steep price to pay to just get rid of geometry cutting. Note that even if it "solved" shader permutations, its solution is the same as doing forward with shaders that dynamic branch over light types/number of lights and setting these parameters per draw call.
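    The runtime culling can be sketched as a screen-tile grid that each light is conservatively inserted into (a hypothetical CPU-side Python sketch; a real implementation culls on the GPU, typically against per-tile depth bounds rather than 2d bounding boxes):

    ```python
    TILE = 16  # tile size in pixels (illustrative; real tile sizes vary)

    def bin_lights(lights, width, height):
        """Assign each light (x, y, radius in pixels) to every tile its
        bounding box overlaps -- a conservative, depth-less cull."""
        tiles_x = (width + TILE - 1) // TILE
        tiles_y = (height + TILE - 1) // TILE
        grid = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
        for index, (x, y, r) in enumerate(lights):
            tx0 = max(0, (x - r) // TILE)
            tx1 = min(tiles_x - 1, (x + r) // TILE)
            ty0 = max(0, (y - r) // TILE)
            ty1 = min(tiles_y - 1, (y + r) // TILE)
            for ty in range(ty0, ty1 + 1):
                for tx in range(tx0, tx1 + 1):
                    grid[(tx, ty)].append(index)
        return grid

    # A small light and a big one on a 64x32 target (4x2 tiles of 16px).
    # The shading pass then loops only over the lights in each pixel's tile.
    grid = bin_lights([(8, 8, 4), (40, 8, 20)], 64, 32)
    ```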

    Deferred shading
    Geometry pass renders a buffer of material attributes (and other properties needed for lighting but bound to geometry, e.g. lightmaps, vertex-baked lighting...). Lighting is computed in screenspace either by rendering volumes (stencil) or by using tiling. Multiple shading equations need to be handled either via branches in the lighting shaders, or via multiple passes per light.
    • Benefits
      • Decouples texturing from lighting. Executes only texturing on geometry so it suffers less from partial quads, overdraw. Also, potentially can be faster on complex shaders (as discussed in the forward rendering issues).
      • Allows volumetric or multipass decals (and special effects) on the GBuffer (without computing the lighting twice).
      • Allows full-screen material passes like analytic geometric specular antialiasing (pre-filtering), which really works only when done on the GBuffer (in forward it fails on all hard edges / split normals), and screen-space subsurface scattering.
      • Less draw calls, less shader permutations, one or few lighting shaders that can be hand-optimized well.
    • Issues
      • Uses more memory and bandwidth. Might be slower due to more memory communication needed, especially on areas with simple lighting.
      • Doesn't handle transparencies easily. If a tiled or clustered deferred is used, the light information can be passed to a forward+ pass for transparencies.
      • Limits materials that need many different material parameters to be passed from geometry to lighting (GBuffer), even if shader variation for material in a modern PBR renderer tends not to be a problem.
      • Can't do lighting computations per object/vertex (i.e. GI), needs to pass everything per pixel in the GBuffer. An alternative is to store baked data in a voxel structure.
      • Accessing lighting related textures (gobos, cubemaps) might be less cache-coherent.
      • In general it has lots of enticing benefits over forward, and it -might- be faster in complex lighting/material/decal scenarios, but the baseline simple lighting/shading case is much more expensive.
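    The decoupling can be illustrated with a toy two-pass sketch (all values illustrative; Python lists stand in for render targets, and the "G-buffer" stores just albedo plus a pre-dotted N·L for brevity):

    ```python
    # Geometry pass: rasterize per-pixel material attributes into a G-buffer.
    # Only texturing/attribute fetch happens here -- no lighting yet.
    gbuffer = [
        {"albedo": 1.0, "n_dot_l": 0.5},
        {"albedo": 0.5, "n_dot_l": 1.0},
    ]

    def lighting_pass(gbuffer, light_intensity):
        # Screen-space pass: geometry is never touched again; every pixel is
        # shaded purely from its stored attributes, so partial quads and
        # overdraw are only paid during the (cheap) geometry pass.
        return [px["albedo"] * px["n_dot_l"] * light_intensity
                for px in gbuffer]

    lit = lighting_pass(gbuffer, 2.0)
    ```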
    Notes on tiled/clustered versus "stenciled" techniques
    On older hardware early-stencil was limited to a single bit, so it couldn't be used both to mark the light volume and to distinguish surface types. Tiled could be needed, as it allowed more material variety by categorizing tiles and issuing multiple tile draws if needed.

    On newer hardware tiled benefits lie in the ability of reducing bandwidth by processing all lights in a tile in a single shader. It also has some benefits for very small lights as these might stall early in the pipeline of the rasterizer, if drawn as volumes (draws that generate too little PS work). 

    In fact most tile renderer demos like to show thousands of lights in view... But the reality is that it's still tricky to afford many shadowed lights per pixel in any case (even on nextgen where we have enough memory to cache shadowmaps), and unshadowed, cheap lights are worse than no lighting at all.

    Often, these cheap unshadowed lights are used as "fill", a cheap replacement for GI. This is not an unreasonable use case, but there are better ways, and standard lights, even when diffuse only, are actually not a great representation of indirect radiance. 
    Voxel, vertex and lightmap bakes are often superior, or one could think of special fill volumes that take more space, embedding some radiance representation and falloff in them.

    In fact one of the typical "deferred" looks that many games still have today is characterized by "many" cheap point lights without shadowing (nor GI, nor gobos...), creating ugly circular splotches in the scene. 
    Also tiled/clustered makes dynamic shadows somewhat harder, as you can't render one shadowmap at a time...

    Tiled and clustered have their reasons, but demo scenes with thousands of cheap point lights are not one. Mostly they are interesting if you can compute other interesting data per tile/voxel.
    You can still get a BW saving in "realistic" scenes with low overlap of lighting volumes, but it's a tradeoff between that and accurate per-quad light culling you get from a modern early-z volume renderer.

    Deferred lighting
    I'd say this is a dwindling technique nowadays compared to deferred shading.
    • Benefits
      • It requires less memory per pass, but the total memory traffic summing all passes is roughly the same. The former used to be -very- important due to the limited EDRAM memory on xbox 360 (and its inability to render outside EDRAM).
      • In theory allows more material "hacks", but it's still very limited. In fact I'd say the material expressiveness is identical to deferred shading, but you can add "extra" lighting in the second geometry pass. On deferred shading that has to be passed along using extra GBuffer space.
      • Allows shadows to be generated and used per light, instead of all upfront like deferred shading/forward+
    • Issues
      • An extra geometric pass (could be avoided by using the same GBuffer as deferred shading, then doing lighting and compositing with the textures in separate fullscreen passes - but then it's almost more a variant of DS than DL imho)
    Some Links: