Saturday, March 21, 2015

Look closely

A BRDF models surface roughness at scales larger than the wavelength of light, but smaller than the observation scale.

In practice surfaces have detail at many different scales, and we know the BRDF has to change with the observation scale, which for us is the pixel footprint projected onto the surface.

Specular antialiasing of normal maps (from Toksvig onwards), which is very popular nowadays, models exactly this: at a given scale the surface roughness, here represented by normal maps, has to be folded into the BRDF. The pixel footprint is handled automatically by mipmapping.
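To make the idea concrete, here is a minimal sketch (in Python, function names mine) of the Toksvig-style adjustment for a Blinn-Phong specular power: the length of the averaged, mipmapped normal tells us how much the normals inside the footprint disagree, and we use it to widen the lobe by lowering the power.

```python
import numpy as np

def toksvig_power(avg_normal, spec_power):
    """Adjust a Blinn-Phong specular power for the roughness implied by a
    mipmapped normal. avg_normal is the *unnormalized* average of the
    normal-map texels covered by the pixel footprint."""
    na = np.linalg.norm(avg_normal)           # 1.0 = perfectly uniform footprint
    ft = na / (na + spec_power * (1.0 - na))  # Toksvig factor, in (0, 1]
    return ft * spec_power                    # effective (lower) specular power

# A footprint whose normals disagree a little (average length ~0.95)
# drops a power-100 highlight to roughly power 15:
print(toksvig_power(np.array([0.1, 0.2, 0.92]), spec_power=100.0))
```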

Consider now what happens if we look at a specific frequency of surface detail. If we look at it close enough, over an interval that is a fraction of the detail's wavelength, the detail will "disappear" and the surface will look smooth, determined only by the underlying material BRDF and an average normal over that interval.


If we "zoom out" a bit though, to scales roughly proportional to the detail's wavelength, the surface starts to get complicated. We can no longer represent it over such intervals with a single normal, nor is it easy to capture such detail in a simple BRDF; we'd need multiple lobes or more complicated expressions.

Zoom out even more and now your observation interval covers many wavelengths, and the surface properties look like they are representable again in a statistical fashion. If you're lucky, it might be possible to capture them with the same analytic BRDF of the underlying material, just with a new choice of parameters. If you're less lucky, it might require a more powerful BRDF model, but we can still reasonably expect to be able to represent the surface in a simple, analytic, statistical fashion. This reasoning, by the way, applies to normal-map specular antialiasing as well...

The question now is: in practice, what do surfaces look like? At what scales do they have detail? Are there scales we don't consider, that is, detail that we don't represent in geometry and normal maps, but that is also not representable with the simple BRDF models we do use?

- Surfaces sparkle!

In games nowadays we easily deal with pixel footprints of a square millimetre or so; for example, that is roughly the footprint on a character's face in a foreground full-body shot at full HD resolution.

If you look around at many real-world surfaces, over an integration area of a square millimetre or so a lot of materials "sparkle": they have either multiple BRDF lobes or very "noisy" ones.



If you look closely at most surfaces, they won't have clean, smooth specular highlights. Noise is everywhere, often "by design" (e.g. with most formica countertops), and sometimes "naturally", notably even in plants, soft plastics, leather and skin.


Some surfaces have stronger spikes in certain directions and definitely “sparkle” visibly, even at scales we render today. Road asphalt is an easy and common example.


Some surfaces do have roughness only at frequencies high enough to always be representable by a simple BRDF (some hard plastics if not textured, glass, glazed ceramic, paper, some smooth paints), but many do not.

Note: eyes can be deceptive in this kind of investigation; for example, Apple's aluminum on a MacBook or iPhone looks "sparkly" in real life, but its surface roughness is probably at too small a scale to matter at the resolutions we render.
This opens another can of worms that I won't discuss: what to do with materials that people "know" should behave in a given way, but that don't really, at the resolutions we can currently render at...

My Macbook Pro Retina

- Surfaces are anisotropic!

Anisotropy is much more frequent than one might suspect, especially for metals. We tend to think that only "brushed" finishes are really anisotropic, but when you start looking around you notice that many metals show anisotropy, due to the methods used to shape them.
Similar artifacts are sometimes found in plastics (especially soft or laminated) and paints as well.


On the left: matte wall paint and a detail of its texture
On the right: the metal cylinder looks isotropic, but close up reveals some structure
It's also interesting to notice how anisotropy itself can't really be captured by a single parameter skewing the BRDF in tangent space; there are many different distributions of anisotropic roughness.

- Surfaces aren’t uniform!

At a larger scale, if you had to bet on how any surface looks at the resolutions at which we author specular roughness maps, bet that it will always be somewhat non-uniform from pixel to pixel.

If it's true that many surfaces have roughness at frequencies that would require complex, multi-lobe BRDFs at our current rendering resolutions, then it's even more likely that our BRDFs should never have exactly the same behaviour from pixel to pixel.


Another way of thinking about the effect of using simpler BRDFs driven by uniform parameters is that we are losing resolution. We are using, per pixel, a surface representation that is probably reasonable at some larger scale, at a scale at which it doesn't apply. So it's somewhat similar to doing the right thing with the wrong, "oversized" pixel footprints. In other words, over-blurring.

- Surfaces aren’t flat!

Flat surfaces aren't really flat. Or at least, if you observe a reflection "travelling" along a smooth surface, it's rare not to notice subtle distortions.

Have you ever noticed the way large glass windows for example reflect? Or ceramic tiles?


This effect is of course at a much lower frequency than the ones described above, but it's still interesting. Certain things that have to be flat in practice are, to the degree we care about in rendering (mirrors, for example), but most others are not.

- Conclusions

My current interest in observing surfaces was sparked by the fact that I'm testing (and have always been interested in) hardware solutions for BRDF scanning, so I needed a number of surface samples to play with: ideally uniform, isotropic and well behaved. And I found it really hard to find such materials!



How much does this matter perceptually, and when? I'm not yet sure, as it's tricky to understand what needs more complex BRDFs and what can be reasonably modeled with rigorous authoring of materials.

Textures from The Order 1886
Specular highlights are always "broken" via gloss noise,
even on fairly smooth surfaces


Saturday, March 14, 2015

Design Optimization Landscape


  • How consciously do we navigate this?
    • Knowledge vs Prototyping
    • Width vs Depth of exploration
    • Speculation is "fast" to move but with uncertainty
    • Application focuses and finds new constraints, but it's expensive
  • Multidimensional and Multiobjective
  • Fuzzy/Noisy, Changing over time
  • We are all optimizers
    • We keep a model of the design landscape, updated by information (experiments, knowledge). Biased by our psychology
    • We try to "sample" promising areas to find a good solution
    • Similar to Bayesian Optimization (information directed sampling)
Bayesian Optimization. Used in black-box problems that
have a high sampling (evaluation) cost.
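To make the analogy a little more concrete, here is a toy sketch (mine, purely illustrative, with all names and constants invented) of Bayesian optimization on a 1D black box: a small Gaussian-process surrogate is our current belief about the "design landscape", and an expected-improvement acquisition decides where to spend the next expensive evaluation (prototype).

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two sets of 1D points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Gaussian-process posterior mean and std-dev at x_test."""
    k_inv = np.linalg.inv(rbf(x_train, x_train) + noise * np.eye(len(x_train)))
    k_s = rbf(x_train, x_test)
    mean = k_s.T @ k_inv @ y_train
    var = 1.0 - np.sum(k_s * (k_inv @ k_s), axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, std, best):
    """How much we expect to improve (minimize) by sampling each point."""
    z = (best - mean) / std
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    pdf = np.exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return (best - mean) * cdf + std * pdf

def black_box(x):                 # the expensive "evaluation" (a prototype, a playtest)
    return np.sin(3.0 * x) + 0.5 * x

x_test = np.linspace(0.0, 2.0, 200)
x_train = np.array([0.1, 1.9])    # start from a couple of known experiments
y_train = black_box(x_train)
for _ in range(8):                # each iteration = one costly prototype
    mean, std = gp_posterior(x_train, y_train, x_test)
    x_next = x_test[np.argmax(expected_improvement(mean, std, y_train.min()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, black_box(x_next))
print("best design found:", x_train[np.argmin(y_train)], y_train.min())
```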

OT: The design space of fountain pens

Met Stephen Hill at GDC this year; he casually mentioned that I should write an article about pens. Well Stephen, maybe I WILL.

I try to live a reasonable life, but there are two things I do possess in greater quantities than I should: writing and photographic equipment. I would say that I collect them, but I don't keep these as a collector would, I actually use them with little regard, so I'm more of a compulsive buyer, I guess.

But with much wasted money comes experience, or something.

- Why fountain pens

Calligraphy, duh! Line variation and reasons. Seriously though, they are different, and really it's a matter of taste... The feeling is different, they require less pressure, the ink is different... But nowadays rollerballs and gel pens have so many tips and technologies it's hard to compare. 
Also, on a purely utilitarian scale, I believe nothing can win a simple 0.5mm mechanical pencil...

Me writing this article.
Pen is a Namiki Vanishing Point ExtraFine
Notebook is a Midori Spiral Ring

So for the most part it is a personal choice, a matter of taste. I like them, they are elegant weapons for a more civilized age, and you might too. 
Now, without further ado, let's delve into this guide on how to start spending way too much money on pens.

- Nibs

First and foremost a fountain pen is about its nib. There are two main axes of nib selection: shape and material.

For shape, most pen brands will make three sizes of round tips: fine, medium and broad. Fancier brands might expand to extra fine, extra (or ultra or double) broad and maybe even ultra extra fine (sometimes also called needlepoint or accounting nib).

The catch here is that, for the most part, these names carry little meaning. Especially at the finer end the differences can be huge: traditionally Japanese nibs are finer, but some Japanese brands don't follow the rule.

A needlepoint nib (disassembled for repair), hand ground (Franklin-Christoph)

Italic, stub, oblique and cursive nibs are all variations of non-round nibs; they produce a finer line in certain directions and a bolder one in others. Italic and stub nibs are cut straight, with the italic being sharper (more difference between writing directions), crisper and harder to use. The oblique nib is cut at an angle. All these come in different sizes, usually specified as the width of their broader stroke in millimeters. Very wide stub nibs are also called "music" nibs and often have more than a single ink slit, to keep the ink flowing.

Selection of Lamy steel nibs

More exotic nibs can be trickier to use and usually require better pens to work well. Bolder nibs lay down more ink, and thus stress the pen's ability to keep a good, constant flow. Finer nibs are easier to break or misalign, and they are harder to make in a way that writes smoothly. Very sharp italic nibs somewhat inherit the worst traits of both.

Consider also that broader nibs will use more ink (deplete faster), the ink will require more time to dry and can bleed more, but many people do like them better for fine writing as the properties of the ink (shading variation, sheen, color) show more with a wetter and more varied line.

Ink shading from an Italic nib
Image source: https://wonderpens.wordpress.com/tag/rhodia/

In terms of materials, there are really only two options: steel and gold. Both can then be plated in different materials (rhodium, ruthenium, pink gold, two-tone and so on) but that is only an aesthetic matter.

The functional difference between steel and gold is that the latter is softer and more flexible, and thus writes more smoothly and with more line variation. Steel is more durable and better for heavy-handed writers.
Somewhat confusingly, both materials can be used to make flex and semi-flex nibs, which are thinner and specifically made to give lots of line variation. They are quite hard to use and suited mostly for calligraphy.

A Pilot/Namiki Falcon flex pen
Image source: https://www.youtube.com/watch?v=XMolEvB5EqA

Most pens have interchangeable nibs, and buying nibs alone is usually much cheaper than buying a full pen.

- Pen body

A big part in the choice of a pen is taken by its aesthetic, which is I guess entirely a matter of taste so I won't discuss it.

There are though a few functional considerations to keep in mind. Ergonomics, of course, is a big one. Bigger pens tend to be more comfortable but, of course, less easy to carry around. Heavier pens might not be great for longer writing sessions, and balance can make a lot of difference.
For the most part you'll have to try and see what fits you best. Remember to try any given pen with and without the cap posted: the balance changes significantly, with some pens designed to be posted while others don't post very well.

The Franklin-Christoph 40 Pocket needs to be used with its cap posted,
it's way too short otherwise. Screw-on cap, clipless, can be converted to an eyedropper

The filling mechanism and ink reservoir is also important. Most pens nowadays use plastic cartridges, most being "international standard". 
The second most widespread mechanism is the piston filler, which is quite convenient and usually has a chamber that carries more ink than a cartridge, but it won't allow you to carry spare ink as easily.

Now, you really will want to use bottled ink in your fountain pens, both because it's cheaper and it comes in a much wider selection, but having a cartridge pen won't stop you. Most of them can be fitted with "converters", special cartridges with a piston to suck ink from a bottle, and you can always refill a cartridge with a syringe (which I actually find less messy than dipping the nib in the bottle to refill).
Also, many (but not all) will work well as "eyedroppers": filling the barrel directly with ink (without a cartridge installed) and sealing it (with a bit of silicone grease on the threads).

There are other minor things to notice. As most pens are round, having a cap with a clip allows them not to roll, which might be something to consider even if you don't need to clip your pen to a notebook.

Nakaya "dorsal fin" model, an asymmetric design made to not roll even w/o a clip
Image source: http://www.leighreyes.com/?p=4313

The cap design and closing mechanism also matter, more than it might seem. Not only do certain caps fit better posted than others, but certain designs are more prone to sucking a bit of ink out of the nib every time you uncap them. Screw-on caps are less prone to this, but some threads can be annoying to feel on the barrel of the pen, depending on how you hold it.

- Ink

A big reason to use fountain pens is that they allow you to play with different inks. It might actually be a much more reasonable idea to collect and play with inks than with different fountain pens.

Inks have lots of different attributes. Even color is not so simple, as many inks can "shade", showing variation (even drastic) as the pen lays down more or less ink on the page (according to pressure and speed); they can have sheen, and even pigments or other particles embedded (though these are often more dangerous to use and can clog a pen if not properly handled).

Inks can be even more interesting than pens!
The Pen Addict is a good review site

Inks can be more or less lubricated: certain inks flow well even in lesser pens, while others tend to be drier. If your pen is already on the dry side, you don't want to couple it with a dry ink, and vice versa.

Different inks also have different drying times, and different tendencies to feather or bleed through paper. Good paper helps by absorbing less, but that also means it can increase drying times.

It's not in general "safe" to mix different inks, although most of the time it won't cause havoc, and you can easily clean your pen by just running it under cold water until it flushes clean. A few brands make inks designed to be mixable, but it's rare.

- Recommendations

I will make a sweeping statement and say that there is no better "starter" fountain pen than a Lamy Safari (or Vista, the so-called "demonstrator", i.e. transparent, version). Its aesthetic might not please everybody, but it's by far the best "writer" for the price, and it comes in a ridiculously wide selection of interchangeable nibs (they even make some optimized for left-handed writing).

A fairly recent contender to this throne is TWSBI, with the 580 and the Mini: really great pens, made to be fully and easily disassembled. The Mini is probably the best compact pen you can buy today; it's a piston filler, so it still holds quite a lot of ink too!

If you're looking for a great extra-fine nib, I haven't so far found anything that beats the Pilot/Namiki Vanishing Point with its 18k gold nib (also called Capless Decimo). It's also pretty and unique. Some don't love its clip; with some effort it can be removed.

A&G Spalding and Bros make surprisingly good, cheap pens (considering the brand doesn't have a big history). Kaweco is a cheaper brand recently gaining traction, but so far I don't like the flow of their nibs; especially on small pens you want -very- easy-writing nibs, as these are not the most comfortable pens to begin with and applying pressure on them is fatiguing.

On the more expensive side, I would say to stay away from Montblanc and the other luxury brands: they are good pens, but you pay more because they are fancy than because they are great. If you have lots of money, or you want to make a really great gift, I'd personally go with a Nakaya, handmade and customized to your taste...

Medium-tier brands that I love, other than the already mentioned Namiki/Pilot (which also makes super expensive maki-e models, by the way), are Sailor and Platinum; both make great nibs (and true "Japanese" extra-fine ones) but somewhat more boring, conventional "cigar" shaped pens.
Franklin-Christoph is an American brand which makes really unique, hand-turned and not very expensive pens, worth a look.

There are of course many, many other great brands, certain fancy brands do make more "understated" models in their line which might turn out to be great, and vintage, used pens are also incredibly interesting, but all these I'd say would be less easy to recommend as a "first" pen.

After you get a pen you'll need paper and ink. Rhodia makes some great, inexpensive paper, but there are really many great brands. Field Notes is really nice as well if you like small notebooks. Tomoe River paper is quite unique too, but it's more of a "fine writing" paper, not for daily use (it takes time to dry, especially with broader nibs).

I personally prefer spiral-bound A5 notebooks because they are easier to use on the go: they open fully, are more rigid and can be held one-handed. And if, like me, you don't love ruled or gridded paper, Rhodia and many other brands make notebooks with plain sheets or with less conspicuous dots instead of lines.

Lastly, inks. For black I go with either Aurora Black or Platinum Carbon Black ink; both are very black with great flow. The latter is pigmented, which is quite rare (another pigmented ink is Sailor's Kiwa-Guro, which I haven't tried yet); that means it can settle in your pen if not used often, and the pen should be cleaned once the fill is depleted, to avoid clogging.

For colored ink it's much, much harder, as there are so many great options. I don't love plain blue ink and usually go with either darker or lighter shades; one of my current favourites is Private Reserve Naples Blue.
Sometimes I carry a red or more colourful ink, often in a broader nib, for highlighting and so on. I find that orange/brown colors pair better with both black and blue than most reds do; my pick there is Noodler's Apache Sunset.
Lastly, if you want something super fancy, nothing is fancier than Herbin's Stormy Grey and Rouge Hematite limited edition inks (if you can still find them).

Incidentally, J. Herbin, Private Reserve and Noodler's, together with Diamine, are also the brands that offer the widest variety of colored inks. Noodler's is known for being quite cheap as well.

Amazing (but not the smoothest ink ever).
Image source: http://www.gourmetpens.com/2014/11/review-j-herbin-stormy-grey-ink.html#.VQTK7VPF-xN

Friday, February 27, 2015

Why the rendering in The Order 1886 rocks.

Premise: Initially I thought I would post an "analysis" similar to the one I did on Battlefield a while ago, just my personal notes and screenshots taken as I played the game, isolated from any outside source of information (discussions between coworkers and other people, which I bet are happening everywhere right now across the industry). 
In the end I took an even more "high level" approach, so these notes won't really talk about techniques and speculation (or at least, that's not their main point of view).

This is also because I am quite persuaded nowadays that being focused on what matters in an image, being really anal about image quality (correctness, perception), matters more than the specific techniques.

Certainly it matters more than "blindly" implementing a checklist of cool technology: better to do less, and even remove features, but be really conscious about what makes an image and why, rather than adding more and more fancy things without understanding.

Also, in the following I'll often say "you can/can't notice...". I have to specify, because it matters, that I mean things you can notice during mostly "normal" gameplay (in a dark room, with a good projector on a sizeable screen), albeit undertaken by a rendering nerd.
Which is different from the work of actually trying to pixel-peep and reverse-engineer and break everything on purpose (not that it's easy anyhow in this game...). Which is -still- different from the (very hard) work needed to find all the mistakes, as many don't register rationally but are still perceptually important.

1) Image stability. Technology that doesn't "show".

Everybody is raving about The Order's antialiasing, and rightfully so. We know it's some sort of 4x MSAA (which nowadays still leaves open a lot of variables on how it's implemented...) and that certainly helps a lot.
But it's not just 4x MSAA: we've seen many titles with MSAA, and you can even crank it up in PC titles, yet I'd say nothing before has come close to the "offline" rendering feel The Order exhibits.

The air ship level is particularly impressive. Thin wire AA? Neat use of alpha-test?
And I think that's not just supersampling, but attention to pixel quality overall. Supersample what can be supersampled, to the extent it can be; filter out the rest, or push it into noise. It's not just antialiasing, it's the whole post-effect pipeline working together (if you pixel-peep you can actually notice how the motion blur sometimes actively helps kill a tiny bit of leftover specular shimmering).

Do less (in this case, -show- less pixel details) but better (more stable).

Some of the blurs can hinder a bit gameplay, but the image quality is undeniable.
And while I'm not personally against screen-space effects, in The Order they are notable by their absence. I spent some time trying to see if they had SSAO, screen-space reflections and so on, and I didn't see anything.
And then you realize: all these techniques, at least as they have been implemented so far, can be spotted; they are not stable, they have telling artifacts.

If a rendering technique "shows", it's already a sign that there's something wrong (e.g. it's fine to say "this image has a limited depth of field", but if you can spot that "this image uses a separable blur", then that's already a problem).

The Order is remarkable even in that regard, a very technical, trained eye might form some educated guesses, but very vaguely I'd say it's really hard to pinpoint most of the specific techniques.

2) Occlusions. Lighting without leaking.

I always say: better to have missing light than light where there shouldn't be any.


Even in photography it's easier to see added light than "removed".
Even if you can't spill due to lack of shadowing...
Behind the scenes from Peter Lindbergh and Gregory Crewdson.
Even the unfortunate concession we sometimes make to gameplay, adding "character" lights to better separate characters from the background, can be visually quite "disturbing"; but there it is by design, it's supposed to show.

Dark Souls 2: not the first, nor the last game to highlight characters.
Don't add an unshadowed light if it's noticeable. And it will -always- be noticeable if it leaks behind surfaces.
While for example a "hair light" on a character in a cinematic can be quite hard to register, a ceiling light shining under a table "disconnects" surfaces and kills realism.

Nowadays occlusion is becoming "easier" with the ability to render more shadowmaps (albeit people still complain about the intricacies of shadowmapping), and with more memory many things can be cached as well.
Static, diffuse indirect global illumination is also not a huge deal (e.g. lightmaps and probes).

But specular will kill you. It's quite a hard problem. Bright highlights shining around object silhouettes, behind walls and so on are very tricky to occlude with ambient occlusion and other non-strongly-directional methods, exactly because of their intensity.

If you've ever played with screen-space reflections you might have noticed that, more than solving the problem of missing reflections, they are useful because they effectively capture the right occlusion (together with reflection) of objects in contact with surfaces (which won't be captured by a cubemap probe, even after parallax-correcting it as a box).

This recent "Paris" scene, done in Unreal, albeit very nice, clearly shows
how specular is hard and screenspace is not enough: see it in motion.
Again, The Order doesn't reveal its hand, whatever it uses, it just works.

The importance of specular occlusion, together with the fact that it's better to over-occlude than to leak, is why I think that if you can't do anything more, at least baking bent normals (even to the point of having only bent normals, without carrying two sets) pays off.
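To illustrate the kind of cheap win I mean, here is a sketch of one way (my own, not what The Order does) to use a baked bent normal plus AO to occlude specular: treat the unoccluded directions as a cone around the bent normal and fade out reflections that point outside it. The AO-to-cone-angle mapping and the soft fade are rough, assumed approximations.

```python
import numpy as np

def specular_occlusion(bent_normal, ao, refl_dir):
    """Dim specular that points into baked-occluded directions.
    bent_normal: normalized average unoccluded direction (baked).
    ao: baked ambient occlusion in [0,1], read as the visible fraction of the hemisphere.
    refl_dir: normalized reflection vector for the pixel."""
    # Solid angle of a cone is 2*pi*(1 - cos(theta)), the hemisphere is 2*pi,
    # so a visible fraction 'ao' gives a cone with cos(theta) = 1 - ao.
    cos_cone = 1.0 - ao
    cos_refl = float(np.dot(bent_normal, refl_dir))
    # Soft fade instead of a hard in/out cone test.
    return float(np.clip((cos_refl - cos_cone) / max(1.0 - cos_cone, 1e-4), 0.0, 1.0))

# A reflection pointing sideways, away from the unoccluded cone, gets killed:
print(specular_occlusion(np.array([0.0, 0.0, 1.0]), ao=0.3, refl_dir=np.array([1.0, 0.0, 0.0])))
```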

3) Atmospherics. Shading the space in-between surfaces.

We always had some atmospherics. Fog, "ground" fog, "god" rays... 

What The Order does there is quite interesting though. London is foggy, and the air is not some kind of afterthought special effect. It's a protagonist in the shading of the image.

It's a recurring theme by now, but again, you can't really "see" it as a special effect or as a specific technique. It's a meaningful contribution to the image that is much more subtle and harder to capture.


The surprising thing to me is that I actually couldn't really notice dynamic occlusion effects in the fog (volumetric shadows or god rays). Rather, what surprises is not such effects, but the fact that -every- light scatters; everything subtly changes the color of the fog.

And again the lack of artifacts. You can't see voxels, you can't see leaks, you can't notice particles rotating and fading in and out. Marvellous.

4) Materials. Attention to detail.

To each technique its own abuses. First it was bloom. Then lens flares. Then depth of field, and so on...
There's always a new "must have" feature that gets cranked to eleven to be "cool" and hip, until everyone gets disgusted and dials it back, only to latch onto something else.

Stop.
For PBR it seems metals can be problematic. More than one game nowadays pushes shiny, perfectly polished stuff everywhere; I can't really understand why.

Dragon Age: Inquisition, a wonderful game done with a wonderful engine.
But it really has some issues with shiny metals; the Orlais palace is one of the worst offenders.
The Order instead doesn't ever lose its composure. Materials are never constant, never flat; they are varied and realistic and blend realistically into each other. Even the small details, like lightbulbs and refraction, or the sheen of the ink on printed paper, are great.
Texture resolution is consistent; you never notice issues between textures of nearby objects with different densities.

Physically based rendering, done mindfully.

Bonus round: baking.

Many games, especially around the period when deferred got all the hype, lost many of these points (made them worse) by sacrificing baked solutions to real-time computation, often in a misguided attempt to solve authoring problems (which should be the domain of better -tools-, not runtime changes).

Real-time solutions are still useful when gameplay requires dynamic scenes and updates (and we can't stream these!), but if you need realtime feedback on GI, write a realtime path tracer for your lightmaps...

Bonus round: F+.

The Order 1886 is famously based on a forward+ lighting engine. I actually wonder how different it would have been on an old-school "splitting" (static light assignment to geometry) forward renderer with generous amounts of baking of "secondary" lights. I don't know.

Bonus round: Next-gen and PC hardware.

GPU power won't matter until we can specifically target it.
The Order shows that, and that's why a PS4 title can look better than other games which were still targeting the previous console generation and then were upgraded with a few extra techniques to push high-end PC GPUs.

Pushing certain marginal stuff to eleven doesn't make quite as much of a quality difference as creating assets and effects specifically for more powerful GPUs, and that's why, even though PCs were already more powerful than these consoles at launch, we have to wait for console games to improve in order to see real advances in game graphics.

Saturday, January 24, 2015

Notes on G-Buffer normal encodings

Tonight on Masterchef, we'll try to cook a G-Buffer encoding that is:
  • Fast when implemented in a shader
  • As compact as possible
  • Makes sense under linear interpolation (hardware "blendable", for pixel-shader based decals)
    • So we assume no pixel-sync custom blending. On fixed hardware like a console's GPU it would probably be possible to sort decals so they don't overlap often in screen space, and to add waits so that read-write of the same buffer never creates data races, but it's not easy.
  • As stable as possible, and secondarily as precise as possible.

http://blog.geogarage.com/2013/03/what-are-seven-seas.html

Normal encodings are a well studied topic, both in computer science and due to their relation with sphere unwrapping and cartographic projections of the globe. Each of the aforementioned objectives, taken singularly, is quite simple to achieve.

Nothing is faster than keeping normals in their natural representation, as a three-component vector, and this representation is also the one that makes the most sense under linear interpolation (renormalizing afterwards, of course).
Unfortunately storing three components means we're wasting a lot of bits, as most bit combinations will yield something that is not a unit vector, so most encodings go unused. In fact we're using just the thin surface of a sphere of valid encodings inside a cubic space of minus-one-to-one components (in 32-bit floating point we get about 51 bits' worth of normals out of the 3x32=96 bits of storage).

If we wanted to go as compact as possible, Crytek's "best fit" normals are one of the best possible representations, together with the octahedral projection (which has a faster encoding and almost the same decoding cost).
Best-fit normals aren't the absolute optimum, as they choose the closest representation among the directions contained in the encoding cube space, so they are still constrained to a fixed set of directions, but they are quite great and easy to implement in a shader.
Unfortunately these schemes aren't "blendable" at all. Octahedral normals have discontinuities over the folded half of the normal sphere, which won't allow blending even if we considered the encoding square to wrap around with a toroidal topology. Best-fit normals are all of different lengths, even for very close directions (that's the key to not wasting encode space), so there really is no continuity at all.
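For reference, a small sketch of the octahedral mapping in its standard formulation (not any particular engine's code): project the normal onto the |x|+|y|+|z| = 1 octahedron, fold the lower hemisphere over the upper one and unwrap to the unit square. The discontinuities mentioned above live along that fold.

```python
import numpy as np

def _sign(v):                                      # sign() that never returns 0
    return np.where(v >= 0.0, 1.0, -1.0)

def oct_encode(n):
    """Unit vector -> point in [-1,1]^2 (octahedral projection)."""
    n = n / np.sum(np.abs(n))                      # project onto the octahedron
    if n[2] < 0.0:                                 # fold the lower hemisphere over
        n[:2] = (1.0 - np.abs(n[1::-1])) * _sign(n[:2])
    return n[:2]

def oct_decode(e):
    """Point in [-1,1]^2 -> unit vector."""
    n = np.array([e[0], e[1], 1.0 - abs(e[0]) - abs(e[1])])
    if n[2] < 0.0:                                 # undo the fold
        n[:2] = (1.0 - np.abs(n[1::-1])) * _sign(n[:2])
    return n / np.linalg.norm(n)

n = np.array([0.3, -0.8, -0.2]); n /= np.linalg.norm(n)
print(n, oct_decode(oct_encode(n)))                # round-trips up to float precision
```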

Finally, stability. That usually comes from world-space normal encodings, on account of the fact that most games have a varying view more often than they have moving objects with a static view.
World-space two-component encodings can't be hardware "blendable" though, as they will always have a discontinuity somewhere on the normal sphere in order to unwrap it.
Object-space and tangent-space encodings are hard because we'd have to store these spaces in the g-buffer, which ends up eating encode space.

View-space can allow blending of two-component encodings by "hiding" the discontinuity in the back-faces of objects, so we don't "see" it. In practice though, with fewer than 12 bits per component you end up seeing wobbling on very smooth specular objects: not only do normals "snap" into their next representable position as we strafe the camera, but there is also a "fighting" between the precise view-to-world transform we apply during decoding and the quantized normal representation. As we don't have 12-bit frame-buffer formats, we'd need to use two 16-bit components, which is not very compact.

So you get quite a puzzle to solve. In the following I'll sketch two possible recipes to try to alleviate these problems.

- Improving view-space projection.

We don't have a twelve-bit format. But we do have a 10-10-10-2 format and an 11-11-10 one. The eleven-bit floats don't help us much because of their uneven precision distribution (and with no sign bit we can't even center the most precision on the middle of the encoding space), but maybe we can devise some tricks to make ten bits "look like" twelve.

In screen space, if we were ok with never having perfectly smooth mirrors, we could add some noise to gain one bit (dithering). Another idea could be to "blur" normals while decoding them, looking at their neighbours, but that's slow and would require some logic to avoid smoothing discontinuities and normal-map details, so I didn't even start on it.
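A tiny sketch of the dithering idea, purely illustrative (a white-noise dither stands in for the ordered or blue-noise pattern you'd actually want): the banding of a 10-bit quantization is traded for noise, which reads as roughly one extra bit on smooth gradients.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize10(x, dither=True):
    """Quantize values in [0,1] to 10 bits, optionally dithering before the floor."""
    noise = rng.random(np.shape(x)) if dither else 0.5   # 0.5 = plain rounding
    return np.floor(x * 1023.0 + noise) / 1023.0

x = np.linspace(0.42, 0.421, 8)           # a very slow gradient
print(quantize10(x, dither=False))        # two flat steps: visible banding
print(quantize10(x))                      # the same steps, broken up by noise
```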

A better idea is to look at the various projections and see how they distribute their precision. Most studies on normal encodings and sphere unwrapping aim to optimize precision over the entire sphere, but here we really don't want that. In fact, we are already using view space exactly to "hide" some of the sphere, where we place the projection discontinuity.

It's common knowledge that we need more than a hemisphere's worth of normals for view-space encodings, but how much exactly, and why?
Some of it is due to normal mapping, which can push normals to face backwards, but that's at least questionable, as those normal-map details would have been hidden by occlusion had they been actual geometric displacement.
Most of the reason we need to consider more than a hemisphere is the perspective projection we apply after the view-space transform, which makes the view vector non-constant in screen space. Around the sides of objects at the edges of the screen we can see normals that point backwards in view space.

In order to fight this we can build a view matrix per pixel, using the view vector as the z-axis of the space, and then encode only a hemisphere (or so) of normals around it. This works, and it really helps eliminate the "wobble" caused by the encode precision fighting the view-space matrix, but it's not particularly fast.
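A sketch of what I mean, with names and the particular frame construction being my own choices (it uses a generic branchless orthonormal-basis construction, nothing special): build a frame per pixel around the normalized view vector, rotate the normal into it, and only then apply a hemisphere-oriented 2D projection.

```python
import numpy as np

def view_basis(view_dir):
    """Orthonormal frame with the normalized per-pixel view vector as its z-axis,
    built without an explicit 'up' vector."""
    x, y, z = view_dir
    s = 1.0 if z >= 0.0 else -1.0
    a = -1.0 / (s + z)
    b = x * y * a
    t = np.array([1.0 + s * x * x * a, s * b, -s * x])
    u = np.array([b, s + y * y * a, -y])
    return np.stack([t, u, view_dir])        # rows: tangent, bitangent, view

def normal_in_view_frame(normal, view_dir):
    """Express the normal in the per-pixel view frame; anything front-facing now
    has z >= ~0, so a hemisphere-only 2D projection can follow."""
    return view_basis(view_dir) @ normal

v = np.array([0.2, -0.1, 0.974679])          # a view vector near the screen edge
print(normal_in_view_frame(np.array([0.0, 0.0, 1.0]), v))
```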

- Encode-space tricks.

Working with projections is fun, and I won't stop you from spending days thinking about them: how to make spherical projections work only on parts of the sphere, how to extend hemispherical projections to get a bit more "gutter" space, how to spend more bits on the front faces and so on. Likely you'll re-derive Lambert's azimuthal equal-area projection a couple of times in the process, by mistake.

What I've found interesting after all these experiments, though, is that you can make your life easier by looking at the encode space (the unit square you're projecting the normals to) instead, starting from a simple projection (equal-area is a good choice):
  1. You can simply clip away some normals by "zooming" in the region of the encode space you'll need. 
  2. You can approximate the per-pixel view-space by shifting the encode space so per-pixel the view-vector is its center.
    • Furthermore, as your view vector doesn't vary that much, and as your projection doesn't distort these vectors that much, you can approximate the shift by a linear transform of the screen-space 2d coordinate...
  3. Most projections map normals to a disk, not the entire square (the only two I've found that don't suffer from this are the hemispherical octahedral and the polar transform/cylindrical projection). You can then map that disk to a square to gain a bit of precision (there are many ways to do so).
  4. You can take any projection and change its precision distribution, if needed, by distorting the encode space! Again, there are many ways to do so; I'd recommend using a piecewise quadratic if you go that route, as you'll need to invert whatever function you apply. You can either distort the square by applying a function to each axis, or do the same by shifting normals in the disk (easy with the polar transform).
    • Note that we can do this not only for normal projections, but also for example to improve the sampling of directions in a cubemap, e.g. for reflections...
Lambert Equal Area

Same encode, with screen-space based "recentering"
Note how the encoding on the flat plane isn't constant anymore
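For completeness, here is a sketch of the equal-area encode used in the images above, scaled so the front hemisphere fills the unit disk, plus a literal version of the "recentering" of point 2 (in a shader you'd approximate the shift with a linear function of the screen coordinate, as noted in the sub-bullet). The scaling and function names are my choices.

```python
import numpy as np

def lambert_encode(n):
    """Lambert azimuthal equal-area projection, scaled so the +z hemisphere exactly
    fills the unit disk; normals slightly below the horizon land just outside it."""
    return n[:2] / np.sqrt(1.0 + n[2])

def lambert_decode(e):
    r2 = np.dot(e, e)
    s = np.sqrt(2.0 - r2)
    return np.array([e[0] * s, e[1] * s, 1.0 - r2])

def encode_recentered(n, view_dir):
    """Point 2 of the list above: shift the encode space so that, per pixel,
    the view vector maps to the center."""
    return lambert_encode(n) - lambert_encode(view_dir)

def decode_recentered(e, view_dir):
    return lambert_decode(e + lambert_encode(view_dir))

n = np.array([0.3, 0.4, np.sqrt(1.0 - 0.25)])
print(n, lambert_decode(lambert_encode(n)))        # round-trips
```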

- Encoding using multiple bases.

This recipe comes from chef Alex Fry, from Frostbite's kitchen. It was hinted at by Sebastien Lagarde in his Siggraph presentation but never detailed. This is how I understood it, but really Alex should (and told me he will) provide a detailed explanation (which I'll link here eventually).

When I saw the issues stemming from view-space encodings, one of my thoughts was to go world-space. But then you reason for a second and realize you can't, because of the inevitable discontinuities.
For a moment I toyed with the idea of a projection that double-covers rotations... But then you'd need to store a bit to identify which of the projections you are using (kind of like dual-paraboloid mapping, for example) and that won't be "blendable" either.
We could do some magic if we had wrapping in the blending, but we don't, and the blending units aren't going to change anytime soon, I believe. So I tossed the idea away.

And I was wrong. Alex's intuition is that you can indeed use multiple projections, and you'll need to store extra bits to encode which projection you used... but these bits can be read-only during decal blending!
We can just read which projection was used, project the decal normals into the same space, and live happily ever after! The key is of course to always choose a space that hides your projection discontinuities.


2-bits tangent spaces

3-bits tangent spaces

4-bits tangent spaces
There are many ways to implement this, but to me one very reasonable choice would be to consider the geometric normal when laying down the g-buffer, and choose, from a small set of transforms, the one with its main axis (the axis we'll orient our normal hemisphere towards during encoding) closest to the surface normal.

That choice can then be encoded in a few bits that we can stuff anywhere. The two "spare" bits of the 10-10-10-2 format would work, but we can also "steal" some bits from any part of the g-buffer, even maintaining the ability to blend that component: use its most significant bits, read them in (as we need to for the normal encoding anyway), and output the component's remaining bits (the ones that need to blend) while keeping the MSBs the same across all decals.

I'll probably share some code later, but a good choice would be to use the bases having one axis oriented along the coordinates of the vertices of a signed unit cube (i.e. spanning minus one to plus one), for which the remaining two tangent axes are easy to derive (no need to re-orthonormalize against an up vector, as is usually done).
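In the meantime, here is how I would sketch it (my code, not Alex's): pick, among the eight directions through the corners of the signed unit cube, the one closest to the geometric normal (3 bits), build a deterministic frame around it, and express the shading normal in that frame before the 2D projection. The frame construction below is just one of many valid deterministic choices; all that matters is that encode and decode agree.

```python
import numpy as np

# Candidate "main axes": directions through the corners of the signed unit cube,
# i.e. normalize((+/-1, +/-1, +/-1)). Three bits select one of them.
CUBE_AXES = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                     dtype=float) / np.sqrt(3.0)

def select_basis(geometric_normal):
    """Index (0..7) of the corner direction closest to the geometric normal;
    for these axes this amounts to the three sign bits of the normal."""
    return int(np.argmax(CUBE_AXES @ geometric_normal))

def basis_frame(index):
    """An orthonormal frame whose z-axis is the selected corner direction."""
    z = CUBE_AXES[index]
    s = 1.0 if z[2] >= 0.0 else -1.0
    a = -1.0 / (s + z[2])
    b = z[0] * z[1] * a
    t = np.array([1.0 + s * z[0] * z[0] * a, s * b, -s * z[0]])
    u = np.array([b, s + z[1] * z[1] * a, -z[1]])
    return np.stack([t, u, z])

def encode(shading_normal, geometric_normal):
    idx = select_basis(geometric_normal)        # goes into the spare g-buffer bits
    local = basis_frame(idx) @ shading_normal   # now mostly in the +z hemisphere
    return idx, local                           # ...then project 'local' to 2D

gn = np.array([0.1, 0.7, 0.7]); gn /= np.linalg.norm(gn)
print(encode(np.array([0.0, 0.6, 0.8]), gn))
```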

Finally, note how this encoding gives us a representation that plays nicely with hardware blending, but is also more precise than a two-component 10-10 projection, if done right. Each encoding space will in fact take care of only a subset of the normals on the sphere, namely all the normals closest to its main axis.
Thus the subsequent projection to a two-component representation has to take care only of a subset of all possible normals: a hemisphere's worth of them, so that decals are able to blend, plus the solid angle of the largest "cell" spanned by the tangent spaces on the sphere. The more spaces (and bits) we have, the fewer normals we have to encode in each (e.g. with only one extra bit the tangent spaces would divide the sphere into two halves, and in each we'd need to encode a full sphere's worth of normals to allow decals to blend, which would be a bad idea).
So, if we engineer the projection to encode only the normals it has to, we can effectively gain precision as we add bits to the space selection.

Conversely, beware of encoding more normals than needed, as that can create creases as you move from one space to another on the sphere when applying normal maps: certain points will have more "slack" space to encode normals and others less, and you'll have to make sure you never end up pushing normals outside the tangent-space hemisphere at any surface point.

- Conclusions & Further references:

Once again we have a pretty fundamental problem in computer graphics that is still a potential source of so much research, and that is still not really solved.
If we look carefully at all our basic primitives (normals, colours, geometry, textures and so on) there is still a lot we don't know, or that we do badly, often without realizing the problems.
The two encoding strategies I wrote about are certainly a step forward, but even at 10 bits per component you will see quantization patterns on very smooth materials (sharp highlights and mirror-like reflections are the real test case for this). And none of the mappings is optimal; there are still opportunities to stretch and tweak the encoding to use its bits better.
Fun times ahead.


A good test case with highly specular materials, flat surfaces, slowly rotating objects/view