
17 February, 2013

More rules. On skin rendering

Update: What happened to this article? Many asked me. Well, now I can tell: THIS happened. My work and Jorge's are not directly related, but we both worked on skin for a while, and some of the stuff in this article is strongly related to what he, independently, discovered. As I was aware that his article was to be presented at GDC and totally rocked, as usual, I preferred to take this offline to give his work a cleaner "launch window". Which you should read. Now (or as soon as the final slides go online...). Even if you were at GDC13, most of the details and most of his research could not fit into the limits of the live presentation.

---

A year ago or so I wrote an article, almost out of frustration, about skin rendering and its "horrors" in videogames. This is a follow-up to that one. I've been working on characters and skin for some years now, in a few games, for a few different companies.
I won't get into any technical detail, partly because I don't want to spill information regarding games I've done in the past, but mostly because I don't think the specific techniques are very important.
It's what we know and how much we understand of the skin (or any other rendering element) that makes the difference. Once we know the problems, what matters and what doesn't, and why things behave a given way, crafting a model for rendering is often not too hard (it might be time consuming though), and it closely depends on the hardware, the budgets, the art resources and the lighting we have to integrate.

1 - Attention to detail
I already talked about this point to an extent in the previous post, but really, this is the first thing to keep in mind. Model (in the mathematical sense), tune (with your artists, hopefully), compare (with solid references). Rinse and repeat, again and again. What is wrong? Why? 
I could not explain this better than Jorge Jimenez did with his work, presented at Siggraph 2012, an example of what applied graphics R&D should look like. He's a great guy and is doing a great service to our craft. Don't look only at his techniques, but understand the components and the methodology.
An image from Jorge's demo
Actually, so far I didn't even end up using any of his work directly in a game (I would try though, if I were to make another deferred lighting renderer; for a forward one I still believe pre-integration, even if it's a pain to manage, has an edge) and there are some details that might be further expanded (for example I believe that for the ears something akin to Colin Barré-Brisebois' method, maybe an extension, could work better)... But his methodology is an example even greater than the excellent implementation he provides. That, to me, is the real deal.

2 - Get good references
How are you going to have "attention to detail" without references... And you'll need, possibly, different references per rendering component. I don't mean here hair, eyes, lips (these too, obviously) but diffuse, specular, subsurface... It's very hard to tune rendering looking only at a picture: the end image is the product of so many parameters, from shading to lighting to the textures and models, to things like color spaces and tonemapping.
Linear HDR photos are a starting point, but these days decoupling specular from diffuse, at least well enough to be used as a lighting reference, is not hard (doing so for texturing purposes requires much more precision and is best done with an automated system).
Acquire references under controlled lighting. Acquire lighting together with your reference under the typical conditions you'll need your characters to be in.
Some of my go-to third-party references:
3 - You'll need tonemapping
If you want to work with real, acquired references, you'll need to understand and apply tonemapping. Skin, if you let your shading just clamp into the sRGB range, looks terribly wrong. Also, detail is completely annihilated, and your artists will mess up the tuning trying to avoid losing too much color or detail (usually, dialing in an unrealistically low amount of specular, and/or painting a very pale diffuse). Try taking Penner's pre-integrated skin scattering formulation, and see how it looks without tonemapping...

Sorry Fallout, you're a great game but the easiest reference for bad skin shading.

Note that this is not related to HDR buffers or anything HDR. Even on an old title, without any shader cycles to spare, rendering into an 8-bit buffer, at least on the skin you'll need some tonemapping. White-balanced Reinhard works decently in a pinch. And if you're thinking that it would be "wrong" to tonemap in the shading or so, don't. If you're rendering to an 8-bit sRGB buffer, you're outputting colors out of your shader, not any radiometric quantity, just final colors. Colors are not "right" or "wrong"; they can just look right or wrong. Anything that you'll do is "wrong" anyways: alpha blending, postprocessing, whatever. So choose the good-looking "wrong" over the horribly looking one...
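
To make this concrete, here's a minimal sketch of the kind of curve I mean: a luminance-based Reinhard with a white point, which is one way to read "white-balanced" (numpy here just for illustration; in a shader this is a handful of instructions, and white_point is a value you'd expose to tuning):

```python
import numpy as np

def reinhard_tonemap(rgb, white_point=4.0):
    """Luminance-based Reinhard with a white point ("extended" Reinhard).

    rgb: float array [..., 3], linear shading values (can exceed 1).
    white_point: the input luminance that should map exactly to 1.0.
    Returns values in [0, 1], ready for sRGB encoding.
    """
    # Rec.709 luminance weights
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    lum = np.maximum(lum, 1e-6)
    # The curve compresses highlights smoothly instead of clamping,
    # so skin specular rolls off rather than clipping to white.
    lum_t = lum * (1.0 + lum / (white_point * white_point)) / (1.0 + lum)
    # Scale color by the luminance ratio (roughly preserves hue).
    return np.clip(rgb * (lum_t / lum)[..., np.newaxis], 0.0, 1.0)
```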

4 - Understand your scales
One of the hard parts of modeling shading is to understand at what scale the various light phenomena occur, what happens between them, and how to transition. What is the scale of the BRDF? What is the scale of textures? What of geometry?
Nowadays this is understood, especially for specular, with techniques like LEAN and CLEAN mapping, Toksvig mapping and so on. But that doesn't mean that we apply the right care to all materials. Often I've found that we take formulas and methods that are valid at one scale and apply them to materials whose roughness and features live at a scale much different from the scale of the original BRDF, and fail to integrate these features.
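
As a reminder of what folding texture-scale detail into the BRDF looks like, here's a sketch of Toksvig's trick for Blinn-Phong exponents (my naming, and the formula as I know it from his note; check the original before shipping):

```python
import numpy as np

def toksvig_gloss(avg_normal, spec_power):
    """Toksvig's trick: fold normal-map variance into specular roughness.

    avg_normal: the *unnormalized* average normal fetched from a mipmapped
                normal map (its length shrinks where normals disagree).
    spec_power: the Blinn-Phong exponent authored for the top mip.
    Returns the adjusted, lower exponent to use at this mip level.
    """
    na = np.linalg.norm(avg_normal)
    na = min(max(na, 1e-4), 1.0)
    # Shortened normals imply variance; the Toksvig factor turns that
    # variance into a broader (lower exponent) specular lobe.
    ft = na / (na + spec_power * (1.0 - na))
    return ft * spec_power
```
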
For example, skin specular: which model to use? If you look at skin close up, it's fairly different from the assumptions of Cook-Torrance.




Again, really: pay attention and use references. Look at your models at different magnifications. You'll find interesting things. For example, a single Kelemen/Szirmay-Kalos (KSK) lobe is hardly enough to capture skin specular (regardless of what some papers write). Fresnel terms might need tweaking, and so on.

One-layer KSK on the left versus a multi-layer model
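
I won't prescribe a fix here, but to illustrate the single-versus-multi-lobe point, this is a sketch of mixing two Beckmann lobes (the distribution used, if I recall correctly, in the KSK-based skin specular from d'Eon's work). The roughnesses and the mix weight below are made up for illustration; fit yours against references:

```python
import numpy as np

def beckmann_ndf(ndoth, m):
    """Beckmann distribution, D(n.h) with roughness m."""
    ndoth = np.clip(ndoth, 1e-4, 1.0)
    c2 = ndoth * ndoth
    return np.exp((c2 - 1.0) / (m * m * c2)) / (np.pi * m * m * c2 * c2)

def skin_specular_ndf(ndoth, m_sharp=0.15, m_broad=0.4, broad_weight=0.35):
    """Illustrative two-lobe mix: a sharp 'oily' lobe plus a broad, rough
    one. All constants here are placeholders, not measured values."""
    return ((1.0 - broad_weight) * beckmann_ndf(ndoth, m_sharp)
            + broad_weight * beckmann_ndf(ndoth, m_broad))
```
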
5 - Account for ALL the lighting and ALL the occlusions
This is especially important if you work with Penner's pre-integrated scattering, but it's true in general. Do you have analytic lights? They will have a diffuse term, to integrate with subsurface scattering somehow. They will have a specular term. How do you shadow them? The shadowing also needs to be part of the subsurface scattering, somehow (Penner provides answers for these already), and it has to occlude specular too.
Do you have ambient? You'll need the same. Ambient has specular too. Ambient contributes to SSS. And it needs to be occluded. And don't stop at that! Compare, use references, and you'll find out if you're missing something, or leaking something.
For example, it's usual to shadow ambient with ambient occlusion, but even in the areas occluded by the AO there is some residual lighting (a hint: generally, you will need to add some redness there, as skin bounces on itself and scatters... just multiplying ambient by AO does not work well, as I wrote in the previous article).
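
As a sketch of the kind of hack I mean (the per-channel exponents here are eyeballed placeholders, not measured values):

```python
import numpy as np

def skin_ambient(ambient_rgb, ao):
    """Instead of a flat ambient * AO, bend the occlusion curve per channel
    so occluded skin keeps a reddish residual bounce instead of going grey.

    ambient_rgb: [..., 3] ambient color; ao: [...] occlusion in [0, 1].
    The exponents are placeholders; tune them against references.
    """
    ao = np.clip(np.asarray(ao, dtype=float), 0.0, 1.0)
    # Red decays the least, blue the most: crevices go warm, not grey.
    ao_rgb = np.power(ao[..., np.newaxis], np.array([0.6, 1.0, 1.4]))
    return ambient_rgb * ao_rgb
```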

Jimenez SSS versus my pre-integrated method. Which is which?
All this work is also why a method like UV-space or screen-space subsurface scattering (like Jimenez's method, which is the state of the art right now) is much easier to use, as it can apply SSS to all the lighting you put in the scene with a single, consistent method.

OT: The two compact cameras you need to know about

I know that many rendering experts also dabble in photography, it's a natural attraction between the two. So, I wanted to write this (hopefully small) off-topic post on the only two compact cameras that, as of today, you should ever consider buying (well, if you're OK with spending $700; otherwise, you might want to stop reading now).

First, a definition. What I mean by "compact" has a very rigorous definition: a camera is "compact" if it fits -well- inside my coat's pocket. This is important, because there are many popular "smaller" cameras that are not quite as small, and many others that strive to be even smaller than that.
The reason I think that definition is fundamental is that I wanted to shop for a camera to -always- bring with me. Anything bigger, I don't carry; anything smaller is nice to have but not important for me (actually, I find cameras that are too small hard to use).

So again, I will be talking about cameras that you truly want to carry with you all the time, not some smaller "vacation" camera that still needs two or three lenses and a bag...

--- Update!
I sold the Fuji. Why? I'm not sure, I loved that camera. But I sold it for the same money I bought it for, and meanwhile the X100S came out (which would also mean my old one would start depreciating).

The X100S is a killer camera, they fixed almost everything... faster, better autofocus (not that I found the old one too bad anyways, but this is significantly better, and AF quality on digital cameras makes ALL the difference), more resolution in the EVF, better manual focus (even if I doubt it's really smooth still) and even a better sensor (16 instead of 12 megapixels, and a bit better at high ISO too).
Yes, it's still a bit quirky here and there, but really, the complaints now are truly minor, and anyhow NO digital camera out there has perfect ergonomics in my opinion. Also, the X100 was already really fun with its hybrid viewfinder, and the image quality is the best you'll ever find in a compact camera (even the huge, expensive full-frame RX1 doesn't fare much better at ISO 1600 and lower, and the Leica M9 has an edge at ISO 200 but higher than that it plain sucks, in comparison) so... Must buy?

Not so fast... Turns out the X100S is such a great camera it doesn't sell cheap! The cheapest I've found a used (but in "new" condition) one so far is around $1200; the older X100 goes for around half that price... Tough call... But there is more!

Sony didn't stand still and made a little thing called the RX100-II... Simply unbelievable. Insane. There are no adjectives, really. It's the same camera as before, but with even better high-ISO capabilities (how much better? more than one stop!), which also brings better autofocus, a flash hotshoe (which also allows for other accessories, like strapping on an optical viewfinder; many brands make these) and some crazy other things like built-in WiFi (cool for tethering via an iPad) and NFC (to configure said WiFi more easily).

So. What to get? Here is my guide:

  1. Do you really need a camera that's as small as possible? -> Go for the RX100-II
  2. Mostly night photography? -> Skip to the last step
  3. Do you hate a 35mm lens? (or really, really, really need a zoom) -> Go for the RX100-II
  4. Do you really need high flash sync speeds, flash during the day? -> Skip to the last step
  5. Would your photographic routine be much, much easier if you could tether to an iPad, or make in camera panoramas and HDR? -> Go for the RX100-II
  6. You can't afford more than $700? -> Go for a used X100, else the X100S

All other compacts are miles, miles behind, it's like they're from a completely different generation; the only reason for them to exist is that they're much cheaper.
Even most other smaller systems can't compare. I'd say among the interchangeable-lens, non-reflex cameras, only the Leica Monochrom, the Olympus OM-D E-M5 and the Sony Nex-7 could compare, quality-wise, to the X100S, and that's really something...
--- End Update

Nex, Micro4/3 and such...
The definition above pretty much excludes the entire Sony Nex system, all the micro-four-thirds and also the Canon M. Even less suitable are the Leica Ms or stuff like the Sony RX1.
These are all great cameras (well, the Canon M is not, from what I hear) but they are not compact enough (trust me, I have a micro-four-thirds system, it doesn't fit my definition of compact). I'd say these are "travel" cameras, based on the fact that I do bring them on my travels, where I might want to arrange a photoshoot but not bring all the studio equipment, but not every day.
If you're shopping for these, I'd probably go for a Panasonic or Olympus micro-four-thirds. The OM-D E-M5 looks really amazing: supersharp sensor, great form factor... Even if it doesn't have the APS-C sized sensor of the NEX, M43 is often as good IQ-wise, and quite a bit smaller, as you can't really make good, compact lenses for APS-C (Sony has a pancake, the 16mm 2.8, but it's a horrible, horrible lens; a Panasonic G married with its 20mm or 14mm pancakes is a much more balanced fit).

Ok so, let's go on to the two contenders!

Sony RX100 - Best "mini" camera EVER.
If you want a really small camera, stop here. In its size category it's by FAR the best, there is no comparison at all, the gap between this one and for example a Panasonic LX or a Canon G is huge. It's a great, great camera.

Pros:
  • Best performance vs size by far.
  • Incredible, incredible image quality.
  • Great, rational controls. The front dial is almost useless, but I've found that mapping it to ISO selection works really well. Everything else is really well placed/made.
  • Good optical image stabilization: even if the camera is not easy to hold, 1/15 and even lower are possible, with a bit of luck.
Cons:
  • For me, it's even too small: the depth is OK, but its width and height are really compact and it's not easy to grip with both hands.
  • No viewfinder, which means you're always shooting with your arms extended. I don't mind looking through the rear LCD per se, but then again, it's a less stable grip.
  • Silly zoom lens. It's actually great, but it goes f1.8 only at 28mm (equivalent), which is understandable, but I would have preferred a 35 or 40mm fixed.
As you can see, there isn't much to say about this one. If you need/like the size, buy it. You don't really have any other option there...

Sony RX100, cropped from an out-of-camera JPEG. In daylight, it's incredibly sharp. At night, it's surprisingly good.

Fuji X100 - A great camera ruined by BAD firmware.
There is lots of talk over the net about this camera. It's quite unique, with a big APS-C sensor in such a compact package, a 35mm f2 fixed lens, and a unique optical viewfinder with projected LCD indicators and the ability to switch to a fully electronic viewfinder with a lever.
No review on the internet will avoid talking about its many defects, and I'll list the ones I've experienced too. But you should not worry. It's a fun camera, you will love shooting with it. It works well, it won't piss you off. And it has GREAT image quality. And that's all that really, really matters.

Pros:
  • Looks and feels great in hand. It's a proper camera, made for manual control (even if the third-stop overrides are silly... but still).
  • The hybrid viewfinder is awesome. The electronic one is weak, but you won't really need it; I use it only to review the shots (you can set an auto-review period, and it switches back and forth fast enough).
  • The APS-C sensor is the best you can buy in a camera that still fits well in a coat's pocket (that means it's slightly inferior to the best NEX, but better than anything else).
  • The 35mm f2 is great (albeit with its quirks), and does not extend or rotate during focus.
  • The flash is tasteful in its default setting.
Cons:
  • No optical image stabilization. I've hand-held it down to 1/15 easily though, due to the good grip and little vibration.
  • The firmware looks and feels like it was made 10 years ago. Slow and ugly and stupid.
  • No flash exposure compensation button! Thank goodness, by default the flash does a proper fill.
  • The lens has some quirks: a bit soft when focused near, and the flares are weird. The lens cover is not built-in.
  • No charging through USB. And the charger is fairly bad too.
  • Bad exposure/DOF preview, useless really.
  • Slow RAW mode. You'll need a fast SD card and some patience; not a camera to shoot action sports...
  • Not the fastest AF around. But trust me, it's fast enough and reliable enough even in the dark. I've seen worse (e.g. very fast lenses on reflex cameras; my 85mm 1.2 on the 5Dmk2 is fucking annoying, the X100 won't really bother you unless you're shooting something that moves really fast). Also, the manual focus mode is a joke.
  • Some design mistakes, e.g. the battery fits in the slot even if inserted wrong, the startup times can be incredibly long if you don't format the SD from the camera, etc...
  • Silly expensive.
Really, the biggest issue with the X100 is its price. As a photographic tool, it works fine. But as a $1400 piece of equipment, its quirks are intolerable. Never, ever, never ever buy this one new. Luckily, people know it's a camera way too expensive for its issues, and it goes used easily for around $700, in great condition and with some accessories too. I bought mine that way, and I saw many similar offers. At that price, it's really, really compelling.

As with the one before, this one, from the X100, is an out-of-camera JPEG. And useless.

12 February, 2013

HDR Workflows

Presentation time! This one started during my last stay at EA; I never completed it before now, so I couldn't present it live. As always, it has loads of notes and references, and in its current state it's meant more as notes than a proper presentation.

Hopefully, the slide deck does a good job of explaining why, really, we need tonemapping (and -when- we need it), reminds everybody to help artists validate their work against proper references, and shows a (common) way to split brightness from color, an implementation of what Photoshop calls "vibrance", and some other little tricks.
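
The deck has the details; as a rough sketch of the general idea behind "vibrance" (boost saturation more where there is less of it, so already-saturated areas like skin move little), with made-up constants:

```python
import numpy as np

def vibrance(rgb, amount=0.5):
    """Rough sketch of a Photoshop-style 'vibrance': a saturation boost
    weighted by how unsaturated each pixel already is.
    rgb: float array [..., 3] in [0, 1]; amount: placeholder strength."""
    mx = rgb.max(axis=-1, keepdims=True)
    mn = rgb.min(axis=-1, keepdims=True)
    sat = mx - mn                        # crude saturation estimate
    boost = amount * (1.0 - sat)         # less saturated -> bigger boost
    grey = rgb.mean(axis=-1, keepdims=True)
    return np.clip(grey + (rgb - grey) * (1.0 + boost), 0.0, 1.0)
```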

As far as tonemapping operators go, it derives a few curves, one of which is fairly similar to (but worse than) Mike Day's recent work. Use his instead :)

10 February, 2013

Color blindness and videogames

After PC Gamer published this article on color blindness and games, I was curious to see what could be done, and how, to better serve the 5% of gamers (8% among males) affected by this deficiency. The original article didn't link any research or implementation hints but came with a note saying that it should be trivial to do... As most games nowadays do some form of color correction, often via volume color textures, you would think it's not hard to bake a global color transform that better maps the RGB space into what can be seen by colorblind persons.

Well, indeed, it's not. Most papers seem to reference as a starting point a project report by Fidaner, Lin and Ozguven, "Analysis of Color Blindness", which derives a simple linear transform: go from RGB to LMS color space, simulate color blindness by losing one of the receptors in the LMS space, compute the difference between the two images, and feed this difference back by adding colors that can be perceived instead.
The algorithm is so simple it's easier to read the paper than my summary of it. Past this simple mapping, all the research I've found improves on it by adjusting for the characteristics of the image you need to convey, which is not only more expensive but also possibly not ideal in our case, as the image contents change frame to frame. I wonder if a nonlinear transform could improve the situation, but I haven't found much about static, global color transforms other than the aforementioned work.
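
For reference, the whole thing really is just a few matrix multiplies; here's a sketch of the protanopia case, with the matrices as they circulate on daltonize.org (and note, as per point 2 below, that the original report applies them directly to gamma-space values):

```python
import numpy as np

# Matrices from Fidaner, Lin and Ozguven's report, as distributed on
# daltonize.org. The original applies them to gamma-space sRGB values;
# see note 2 below about linearization.
RGB2LMS = np.array([[17.8824,    43.5161,   4.11935],
                    [ 3.45565,   27.1554,   3.86714],
                    [ 0.0299566,  0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)
# Protanopia: the lost L response is rebuilt from M and S.
SIM_PROTAN = np.array([[0.0, 2.02344, -2.52581],
                       [0.0, 1.0,      0.0],
                       [0.0, 0.0,      1.0]])
# Shift the lost (red-green) error into channels that are still perceived.
ERR_SHIFT = np.array([[0.0, 0.0, 0.0],
                      [0.7, 1.0, 0.0],
                      [0.7, 0.0, 1.0]])

def daltonize_protan(rgb):
    """rgb: float array [..., 3] in [0, 1]. Returns the corrected image."""
    sim = rgb @ (LMS2RGB @ SIM_PROTAN @ RGB2LMS).T  # simulate protanopia
    err = rgb - sim                                 # what was lost
    return np.clip(rgb + err @ ERR_SHIFT.T, 0.0, 1.0)
```

Note that the whole chain is linear, so it collapses to a single 3x3 matrix, which bakes trivially into the color-correction volume texture mentioned above.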

Further notes:
  1. The website daltonize.org has implementations of the linear transform in many languages.
  2. Some papers seem to do the RGB->LMS (and vice versa) conversion in gamma space (including the Analysis of Color Blindness one), while some others don't. The confusion, I guess, comes from the fact that there are many RGB spaces, not only sRGB with its gamma 2.2-ish transfer function. From what Wikipedia says, CIE XYZ to LMS is a linear transform, and keeping in mind that sRGB to CIE XYZ is not, we have to gamma/degamma. I've also found this paper (which comes with source code) that makes the conversion between sRGB and LMS an even less trivial matter.
  3. There are some variants of the original daltonization algorithm. In particular, this paper proposes (among other things which are less relevant) the use of a modified error matrix (formula n.5).
  4. If you wanted to spare some GPU cycles, it's possible to feed the error term computed by the daltonization back into other post-effects, to locally enhance the contrast between areas of two similar colors.
    1. This paper illustrates the concept.
    2. You could feed it back into an existing, suitable effect (e.g. bloom).
    3. You could trade a post-processing step for this: for example, remove DOF or motion blur and add an "unsharp mask" filter guided by the error. A way of doing this is to compute an unsharp mask, in a single pass, for both the regular and colorblind-simulated colors, and then feed the error (contrast loss) back into the image (see the sketch after this list).
  5. A color-blind simulation mode could at the very least, help UI designers with their job.
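
And here is the sketch promised in point 4.3, one possible reading of the idea (strength and radius are made-up tuning values, and `simulate` would be your colorblind-simulation transform, e.g. the protanopia matrices above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def colorblind_unsharp(rgb, simulate, strength=1.0, radius=2.0):
    """Run one unsharp-mask pass over both the real image and its
    colorblind-simulated version; wherever the simulated image has lost
    local contrast, feed that loss back as extra sharpening.
    rgb: [H, W, 3] array; simulate: function rgb -> simulated rgb."""
    def local_contrast(img):
        lum = img @ np.array([0.2126, 0.7152, 0.0722])
        return lum - gaussian_filter(lum, radius)   # unsharp-mask detail
    lost = local_contrast(rgb) - local_contrast(simulate(rgb))
    return np.clip(rgb + strength * lost[..., np.newaxis], 0.0, 1.0)
```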

09 February, 2013

Tuning: Two-axis controllers, OSC and multitouch.

So far, and even in the near future I'd say, game visuals, even with physically-based (or inspired... or "reasonable") shading, have been a matter of tweaking and tuning, both by the artists and by the programmers, of functions and parameters. Thus, the speed and intuitiveness with which this tuning and iteration happens have a crucial, direct impact on the end results.

Now, if you've been following this blog for a bit, you might have noticed that I'm a strong proponent of live-coding, hot-swapping and all manner of ways to transform our art into a direct-feedback mode of manipulation.
So I was very curious when, two weeks ago, I had the chance to attend talks by two of the forefront proponents and innovators in this field: the designer-coder-instructor-inventor Bret Victor, and Pixar's and the demoscene's manipulator of functions extraordinaire, Inigo Quilez.

I won't go into the details of these, but one thing struck me that I thought was worth further research and sharing: IQ's use of OSC to connect hardware MIDI (well, OSC) controllers to shader parameters in his demo/livecoding/VJ system. Of course, such things make perfect sense when you're VJing in a club, and I've seen, even in our industry, some experiments with MIDI devices controlling a variety of things in a game; honestly, my experience relegated them to being most often nothing but a gimmick. Inigo, being the smart guy he is, also came to a similar conclusion, with one exception though: when it comes to tuning two or more correlated parameters, they have their place.

Of course! All of a sudden I feel so stupid for having had to explain all kinds of multiply/add controls to artists so many times! MADD is the shader coder's best friend: it's cheap, it's powerful, it ends up everywhere. But for the artist, even when you "convert" it into the often friendlier add-multiply (just postprocess the data out of the tuning tool), and use the right bounds, constraints and so on, it often turns out to be a pain to manage. I've always had these controls bound to two separate sliders, which is a really stupid idea on a non-multitouch device (or without physical independent sliders).
Of course, you could just use a two-axis pad and control it with the mouse; the solution is quite obvious. Honestly, that's why you should have a human interface designer on the team, not only for the game UI, but for tools, for in-game data display and so on. But I digress...
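
To give an idea of how little code this takes on the receiving side, here's a sketch using the python-osc package (the "/pad" address, the port and the parameter names are whatever you'd configure in your controller layout; nothing here is standard):

```python
from pythonosc import dispatcher, osc_server

# Map one two-axis pad of the controller to a correlated parameter pair:
# x drives the multiply, y drives the add of a shader MADD constant.
shader_params = {"skin_spec_mul": 1.0, "skin_spec_add": 0.0}

def on_pad(address, x, y):
    # Remap the controller's [0, 1] axes into sensible authoring bounds.
    shader_params["skin_spec_mul"] = x * 4.0
    shader_params["skin_spec_add"] = -1.0 + y * 2.0
    print(shader_params)  # here you'd push the update to the game/tool

d = dispatcher.Dispatcher()
d.map("/pad", on_pad)
# TouchOSC and friends send UDP; point the device at this machine's IP.
osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), d).serve_forever()
```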

I went on and did some research on OSC and devices. There are too many things! I'd say a good starting point is to take a look at a client for multitouch devices (iSomething, Android or whatever). These are nice to play with because they are easily configurable and support bi-directional communication (which some hardware devices implement, often at a premium price).

Even more interestingly, some applications, like the open-source Mrmr and Control (whose development, unfortunately, doesn't seem to be very active nowadays), come with support for dynamic configuration and scripting, so your game or tool can send messages to change the layout of the controller, which basically ends up being a remote, programmable GUI, per se an awesome thing to have. Control is essentially an HTML5/JS GUI connected to OSC, with OSC able to feed back into the JS to manipulate the GUI remotely!

Image from the TouchOSC website
Now of course, all of this could easily be done in whatever custom tuning environment you already have; in the end, none of this is rocket science, remote displays for GUIs existed twenty years ago, and you probably already have a decent collection of widgets, maybe some good colour pickers, some axis/angle selectors, sliders and so on.

The game-changer here, in my mind, is the multitouch support though, which enables a category of manipulators impossible on a PC alone.
Also, once you support OSC, you automatically will be able to use any kind of OSC device, driver, or aggregator. A joystick as input? A webcam? Sound? Wiimote? A sequencer (see this, it's pretty crazy)? Custom hardware? You name an input method and there is probably a MIDI or OSC interface for it, and pretty much any programming language will have bindings, as do most libraries for creative coding, like the multiplatform openFrameworks.

Hardware is surprisingly not too expensive either: you can get something basic like a Korg nanoKONTROL for sixty bucks, and even fancier interfaces, like the QuNeo, retail for around two hundred. A fun world.