
07 May, 2013

Hey! I was still using that

I'm working on a lot of things and this blog has been paying the price a bit. I'll write something serious "soon", but for now, you get this...

Don't install KB2670838. I guess everybody knows by now: it breaks the old Pix for Windows, and you won't even be able to replay old captures with it. True, we have a new Pix now (the graphics debugger in VS2012) and the old one was, well... old. But I don't care, and I bet not many people do, as the new debugger, for now, is much slower than the old one and more verbose, two very bad things when you have to analyze thousands of draw calls.

Moreover, and this is the point, even if it were equally capable and just "different", that alone would be reason enough not to kill the old one. People are resistant to change; change alone has a negative impact. See how much turmoil there is over each Facebook update (or iOS firmware update... all of which seem to drain your battery more, if you look at the forums...). So you'd better have a pretty good reason for it.

I don't care about learning a new tool if the old one worked as well or, in this case, better. Nor do I care, as a user, about the perfectly reasonable motivations you had to invest in the new one.

Now, this is just an example (I could have written the same about Apple and the Maps debacle; I didn't update there either), and I'm sure Microsoft doesn't really care much about PC/DX11 anymore, and it's not making a ton of money on it... People are even going back to OpenGL these days.

Intel is the only company right now that seems to strongly invest in PC graphics, with tools, R&D, demos, lots of activity... GPA is the best debugger today, but I still like Pix: it's faster, especially on DX10/11, and navigating captures still works better than GPA's selection stuff.

P.S. If you're using VS2012 and you want to capture with Pix for Windows, remember you have to switch your libraries from the Windows 8 SDK back to the June 2010 DX SDK ones.
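For reference, a minimal sketch of what that switch looks like in the project settings. The paths are illustrative and depend on where the June 2010 SDK is installed; `$(DXSDK_DIR)` is the environment variable its installer sets. Putting the SDK paths first makes them win over the default (Windows 8 SDK) ones:

```
Project Properties -> VC++ Directories
  Include Directories:  $(DXSDK_DIR)Include;$(IncludePath)
  Library Directories:  $(DXSDK_DIR)Lib\x86;$(LibraryPath)
```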

01 April, 2013

Space Marine did it first!

You know, I don't usually post links to news and such, but all the guys behind Space Marine worked so hard and were so amazing that I have to do this shameless plug. I think you get more attached to a product when people really work their asses off, and are super smart, and in the end sales are not great... Oh well...

Nowadays a few people are doing "medium range" ambient occlusion using top-down projected and blurred depth buffers. No one credited Space Marine, and I think very honestly so, as we didn't publish much on it at all. Still, it might be worth a second look at the slides I've pushed, as SM's technique still has a few tricks that I haven't seen in the others around so far, with tiling to keep the update times small and depth peeling to handle interiors and areas with multiple heights.
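Since I can't share the actual implementation, here's a tiny 1D sketch of just the basic idea, ignoring the tiling and depth-peeling parts (all names and numbers are illustrative, not from the Space Marine code): project the scene top-down into a height map, blur it, and darken a point by how far it sits below the blurred "occluder" height above it.

```python
def box_blur_1d(heights, radius):
    # Naive box blur, standing in for whatever separable blur you'd really use.
    n = len(heights)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(heights[lo:hi]) / (hi - lo))
    return out

def top_down_ao(point_height, blurred_occluder_height, falloff=2.0):
    # The further the shaded point sits below the (blurred) geometry above it,
    # the more it is occluded. Returns 1 = unoccluded, 0 = fully occluded.
    occlusion = (blurred_occluder_height - point_height) / falloff
    return 1.0 - min(max(occlusion, 0.0), 1.0)
```

A point level with the blurred height map is unoccluded; a point deep under a blurred "roof" of geometry goes dark, with the blur providing the soft, medium-range falloff.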

Shadowmaps and cascades rant/thoughts...
On an only slightly related note, and to add some "novel" content to this post, I was wondering for a bit about shadowmaps. We tried a couple of ways of caching them, but in SM they failed.
Simply updating some cascades every other frame didn't work due to self-occlusion of dynamic objects, and re-rendering only the dynamics (and more advanced methods) failed because the bandwidth required to move shadowmaps around was huge on 360/PS3.

What I don't remember anymore is whether we tried to solve the self-shadowing problem by having dynamic objects access the cascades using the position they had at the previous frame (when the cascade was computed). My memory is very bad (that's partially why I keep this blog...), I'll have to ask my then-coworkers about this. If we didn't try, I was dumb. If we did, I wonder why it failed. Food for thought; maybe I'll post an update on this later on. As far as I gathered, Crytek didn't do this in their every-other-frame update in Crysis 2.
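The staleness problem, and the previous-frame-position idea, can be shown in a tiny 1D model (purely illustrative, all names made up): a cascade re-rendered only on even frames stores a moving occluder's depth as it was at render time; with a light pointing straight down, "depth in the map" is just the occluder's height.

```python
def shadow_depth_in_map(occluder_pos_at_render):
    # Light straight down: the stored shadowmap depth is the occluder's
    # position at the frame the cascade was rendered.
    return occluder_pos_at_render

def self_shadowed(test_pos, map_depth, bias=0.01):
    # A surface incorrectly shadows itself if it lies "behind" its own
    # recorded depth by more than the bias.
    return test_pos > map_depth + bias
```

Testing the object's current position against the stale map gives false self-shadowing; testing the position it had when the cascade was rendered stays consistent. That's the whole idea, minus the awkwardness of actually having last frame's positions available at shading time.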

Update: I see the catch. Space Marine did "splat" the shadows in screen space, for good reasons. And if you do so, you reconstruct the position of the objects to be shadowed from the current frame's depth buffer, from a depth prepass (in our case, from the GBuffer pass, being a deferred renderer), so there is no easy way to implement this.
There are ways, like stenciling and using an MRT containing last frame's world positions... which could later have been used for motion blur vectors (which we did compute), so it's not crazy even in that scenario. But I'm quite sure now that we didn't try all this; for how bad my memory is, I would have remembered such a large change :)

Bonus hint: always point your SSAO towards the sky...

17 February, 2013

More rules. On skin rendering

Update: What happened to this article? Many asked me. Well, now I can tell: THIS happened. My work and Jorge's are not related at all, but we both worked on skin for a while, and some of the stuff in this article is strongly related to what he, independently, discovered. As I was aware that his article was to be presented at GDC and would totally rock, as usual, I preferred to take this offline to give a cleaner "launch window" to his work. Which you should read. Now (or as soon as the final slides go online...). Even if you were at GDC13, most of the details and most of his research could not fit into the limits of the live presentation.

---

A year ago or so I wrote an article, almost out of frustration, about skin rendering and its "horrors" in videogames. This is a follow-up to that one. I've been working on characters and skin for some years now, in a few games, for a few different companies.
I won't get into any technical detail, partly because I don't want to spill information regarding games I've done in the past, but mostly because I don't think the specific techniques are very important.
It's what we know and how much we understand of skin (or any other rendering element) that makes the difference. Once we know the problems, what matters and what doesn't, and why things behave a given way, crafting a rendering model is often not too hard (it might be time consuming, though), and it depends closely on the hardware, the budgets, the art resources and the lighting we have to integrate.

1 - Attention to detail
I already talked about this point to an extent in the previous post, but really, this is the first thing to keep in mind. Model (in the mathematical sense), tune (with your artists, hopefully), compare (with solid references). Rinse and repeat, again and again. What is wrong? Why? 
I could not explain this better than Jorge Jimenez did with his work, presented at Siggraph 2012, an example of what applied graphics R&D should look like. He's a great guy and is doing a great service to our craft. Don't look only at his techniques, but understand the components and the methodology.
An image from Jorge's demo
Actually, so far I haven't even ended up using any of his work directly in a game (I would try though, if I were to make another deferred lighting renderer; for a forward one I still believe pre-integration, even if it's a pain to manage, has an edge), and there are some details that might be further expanded (for example, I believe that for the ears something akin to Colin Barré-Brisebois' method, maybe an extension of it, could work better)... But his methodology is an example even greater than the excellent implementation he provides. That, to me, is the real deal.

2 - Get good references
How are you going to have "attention to detail" without references... And you'll possibly need different references per rendering component. I don't mean here just hair, eyes, lips (these too, obviously) but diffuse, specular, subsurface... It's very hard to tune rendering looking only at a final picture: that end image mixes so many parameters, from shading to lighting to the textures and models, to things like color spaces and tonemapping.
Linear HDR photos are a starting point, but these days decoupling specular from diffuse, at least well enough to be used as a lighting reference, is not hard (doing so for texturing purposes requires much more precision and is best done with an automated system).
Acquire references under controlled lighting. Acquire lighting together with your reference under the typical conditions you'll need your characters to be in.
Some of my go-to third-party references:
3 - You'll need tonemapping
If you want to work with real, acquired references, you'll need to understand and apply tonemapping. Skin, if you let your shading just clamp into the sRGB range, looks terribly wrong. Also, detail is completely annihilated, and your artists will mess up the tuning trying to avoid losing too much color or detail (usually by dialing in an unrealistically low amount of specular, and/or painting a very pale diffuse). Try taking Penner's pre-integrated skin scattering formulation and see how it looks without tonemapping...

Sorry Fallout, you're a great game but the easiest reference for bad skin shading.

Note that this is not related to HDR buffers or anything HDR. Even on an old title, without any shader cycles to spare, rendering into an 8bit buffer, you'll need some tonemapping, at least on the skin. White-balanced Reinhard works decently in a pinch. And if you're thinking that it would be "wrong" to tonemap in the shading, don't. If you're rendering to an 8bit sRGB buffer, you're outputting colors out of your shader, not any radiometric quantity, just final colors. Colors are not "right" or "wrong"; they can just look right or wrong. Anything you do is "wrong" anyway: alpha blending, postprocessing, whatever. So choose the good-looking "wrong" over the horrible-looking one...

4 - Understand your scales
One of the hard parts of modeling shading is to understand at what scale the various light phenomena occur, what happens between the scales, and how to transition. What is the scale of the BRDF? What is the scale of textures? What of geometry?
Nowadays this is understood, especially for specular, with techniques like LEAN and CLEAN, Toksvig mapping and so on. But that doesn't mean we apply the same care to all materials. Often I've found that we take formulas and methods that are valid at one scale and apply them to materials whose roughness and features live at a scale much different from that of the original BRDF, and then fail to integrate these features.
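As a concrete instance of scale-aware shading, here's the standard Toksvig adjustment in a few lines (not skin-specific; a sketch of the published idea): when a normal map is mipped, the averaged normal shortens, and that lost length measures normal variance, which can be folded into the specular power.

```python
def toksvig_specular_power(avg_normal_len, spec_power):
    # avg_normal_len: length of the averaged (mipped) normal, in (0, 1].
    # A shorter average normal means more normal variance under this texel,
    # i.e. a rougher effective surface, so the specular power is reduced.
    ft = avg_normal_len / (avg_normal_len + spec_power * (1.0 - avg_normal_len))
    return ft * spec_power
```

With a unit-length average normal (no variance) the power is untouched; as the normals under a texel spread out, the highlight widens instead of aliasing.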
For example, skin specular. Which model to use? If you look at skin, close up, it's fairly different from the assumptions of Cook-Torrance.




Again, really: pay attention and use references. Look at your models at different magnifications. You'll find interesting things. For example, a single KSK lobe is hardly enough to capture skin specular (regardless of what some papers write). Fresnel terms might need tweaking, and so on.

One-layer KSK on the left versus a multi-layer model
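For illustration, a minimal sketch of what a multi-lobe specular looks like, built on the Beckmann distribution (the one underlying KSK). The weights and roughnesses here are made up for the example, not measured skin data:

```python
import math

def beckmann_ndf(ndoth, roughness):
    # Beckmann normal distribution function, D(h) = exp((c2-1)/(a2*c2)) / (pi*a2*c2^2),
    # with c2 = (N.H)^2 and a2 = roughness^2.
    a2 = roughness * roughness
    c2 = ndoth * ndoth
    return math.exp((c2 - 1.0) / (a2 * c2)) / (math.pi * a2 * c2 * c2)

def two_lobe_spec(ndoth, r1=0.3, r2=0.7, w=0.85):
    # A tight lobe plus a broad one: the broad lobe supplies the wide,
    # greasy-looking tail a single lobe can't reproduce.
    return w * beckmann_ndf(ndoth, r1) + (1.0 - w) * beckmann_ndf(ndoth, r2)
```

Away from the highlight peak the tight lobe dies off almost completely, while the broad lobe keeps contributing, which is exactly the residual sheen a one-lobe fit misses.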
5 - Account for ALL the lighting and ALL the occlusions
This is especially important if you work with Penner's pre-integrated scattering, but it's true in general. Do you have analytic lights? They will have a diffuse term, to be integrated with subsurface scattering somehow. They will have a specular term. How do you shadow them? The shadowing also needs to be part of the subsurface scattering, somehow (Penner provides answers for these already), and it has to occlude specular too.
Do you have ambient? You'll need the same. Ambient has specular too. Ambient contributes to SSS. And it needs to be occluded. And don't stop at that! Compare, use references, and you'll find out if you're missing something, or leaking something.
For example, it's usual to shadow ambient with ambient occlusion, but even in the areas occluded by the AO there is some residual lighting (a hint: generally you will need to add some redness there, as skin bounces light onto itself and scatters... just multiplying ambient*AO does not work well, as I wrote in the previous article).
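One simple way to sketch that last point (the bounce color below is an illustrative guess, the kind of thing artists would tune): instead of multiplying ambient by AO directly, remap the AO through a reddish ramp so fully occluded areas keep a residual, skin-bounce-colored term.

```python
def skin_ambient(ambient, ao):
    # ao = 1: fully unoccluded, ambient passes through untouched.
    # ao = 0: instead of going black, the ambient is tinted by a reddish
    # "bounce" color approximating light scattered off nearby skin.
    bounce = (0.9, 0.35, 0.25)  # illustrative, artist-tunable
    return tuple(a * (b + (1.0 - b) * ao) for a, b in zip(ambient, bounce))
```

The ramp costs one lerp over plain ambient*AO, and it's the difference between creases that look like dirt and creases that look like skin.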

Jimenez SSS versus my pre-integrated method. Which is which?
All this work is also why a method like UV-space or screen-space subsurface scattering (like Jimenez's method, which is the state of the art right now) is much easier to use, as it applies SSS to all the lighting you put in the scene with a single, consistent method.

OT: The two compact cameras you need to know about

I know that many rendering experts also dabble in photography; there's a natural attraction between the two. So I wanted to write this (hopefully small) off-topic post on the only two compact cameras that, as of today, you should ever consider buying (well, if you're ok with spending $700; otherwise, you might want to stop reading now).

First, a definition. What I mean by "compact" has a very rigorous definition: a camera is "compact" if it fits -well- inside my coat's pocket. This is important, because there are many popular "smaller" cameras that are not quite that small, and many others that strive to be even smaller than that.
The reason I think that definition is fundamental is that I wanted to shop for a camera to -always- bring with me. Anything bigger I don't carry; anything smaller is nice to have but not important for me (actually, I find cameras that are too small hard to use).

So again, I will be talking about cameras that you truly want to carry with you all the time, not some smaller "vacation" camera that still needs two or three lenses and a bag...

--- Update!
I sold the Fuji. Why? I'm not sure, I loved that camera. But I sold it for the same money I bought it for, and meanwhile the X100S came out (which would also mean my old one would start depreciating).

The X100S is a killer camera, they fixed almost everything... faster, better autofocus (not that I found the old one too bad anyway, but this is significantly better, and AF quality on digital cameras makes ALL the difference), more resolution in the EVF, better manual focus (even if I doubt it's really smooth yet) and even a better sensor (16 instead of 12 megapixels, and a bit better at high ISO too).
Yes, it's still a bit quirky here and there, but really, the complaints now are truly minor, and anyhow NO digital camera out there has perfect ergonomics in my opinion. Also, the X100 was already really fun with its hybrid viewfinder, and the image quality is the best you'll ever find in a compact camera (even the huge, expensive full-frame RX1 doesn't fare much better at ISO 1600 and lower, and the Leica M9 has an edge at ISO 200 but above that it plain sucks in comparison), so... Must buy?

Not so fast... Turns out the X100S is such a great camera it doesn't sell cheap! The cheapest I've found so far for a used (but in "new" condition) one is around $1200; the older X100 goes for around half that price... Tough call... But there is more!

Sony didn't stand still and made a little thing called the RX100-II... Simply unbelievable. Insane. There are no adjectives, really. It's the same camera as before, but with even better high-ISO capabilities (how much better? more than one stop!), which also brings better autofocus, a flash hotshoe (which also allows for other accessories, like strapping on an optical viewfinder, many brands make these) and some other crazy things like built-in WiFi (cool for tethering via an iPad) and NFC (to configure said WiFi more easily).

So. What to get? Here is my guide:

  1. Do you really need an as-small-as-possible camera? -> Go for the RX100-II
  2. Mostly night photography? -> Skip to the last step
  3. Do you hate a 35mm lens? (or really, really, really need a zoom) -> Go for the RX100-II
  4. Do you really need high flash sync speeds, flash during the day? -> Skip to the last step
  5. Would your photographic routine be much, much easier if you could tether to an iPad, or make in-camera panoramas and HDR? -> Go for the RX100-II
  6. Can't afford more than $700? -> Go for a used X100, else the X100S

All other compacts are miles, miles behind; it's like they're from a completely different generation, and the only reason for them to exist is that they're much cheaper.
Even most other smaller systems can't compare. I'd say that among the interchangeable-lens, non-reflex cameras, only the Leica Monochrom, the Olympus OM-D E-M5 and the Sony NEX-7 could compare, quality-wise, to the X100S, and that's really something...
--- End Update

Nex, Micro4/3 and such...
The definition above pretty much excludes the entire Sony NEX system, all the micro-four-thirds cameras and also the Canon M. Even less suitable are the Leica Ms or stuff like the Sony RX1.
These are all great cameras (well, the Canon M is not, from what I hear) but they are not compact enough (trust me, I have a micro-four-thirds system, it doesn't fit my definition of compact). I'd say these are "travel" cameras, based on the fact that I do bring them on my travels, where I might want to arrange a photoshoot but not bring all the studio equipment, but not every day.
If you're shopping for these, I'd probably go for a Panasonic or Olympus micro-four-thirds. The OM-D E-M5 looks really amazing: supersharp sensor, great form factor... Even if it doesn't have the APS-C sized sensor of the NEX, M43 is often as good IQ-wise, and quite a bit smaller, as you can't really make good, compact lenses for APS-C (Sony has a pancake, the 16mm 2.8, but it's a horrible, horrible lens. A Panasonic G married to its 20mm or 14mm pancakes is a much more balanced fit).

Ok so, let's go on to the two contenders!

Sony RX100 - Best "mini" camera EVER.
If you want a really small camera, stop here. In its size category it's by FAR the best, there is no comparison at all, the gap between this one and for example a Panasonic LX or a Canon G is huge. It's a great, great camera.

Pros:
  • Best performance vs size by far.
  • Incredible, incredible image quality.
  • Great, rational controls. The front dial is almost useless, but I've found that mapping it to the ISO selection works really well. Everything else is really well placed/made.
  • Good optical image stabilization; even if the camera is not easy to hold, 1/15 and even lower are possible, with a bit of luck.
Cons:
  • For me it's even too small: the depth is ok, but its width and height are really compact and it's not easy to grip with both hands.
  • No viewfinder, which means you're always shooting with your arms extended. I don't mind looking through the rear LCD per se, but then again, it's a less stable grip.
  • Silly zoom lens. It's actually great, but it goes to f1.8 only at 28mm (equivalent), which is understandable, but I would have preferred a 35 or 40mm fixed.
As you can see, there isn't much to say about this one. If you need/like the size, buy it. You don't really have any other option there...

Sony RX100, cropped from out of camera JPEG. In daylight, it's incredibly sharp. At night, it's surprisingly good.

Fuji X100 - A great camera ruined by a BAD firmware.
There is lots of talk over the net about this camera. It's quite unique, with a big APS-C sensor in such a compact package, a 35mm f2 fixed lens, and a unique optical viewfinder with projected LCD indicators and the ability to switch to a fully electronic viewfinder with a lever.
No review on the internet will avoid talking about its many defects, and I'll list the ones I've experienced too. But you should not worry. It's a fun camera, you will love shooting with it. It works well, it won't piss you off. And it has GREAT image quality. And that's all that really, really matters.

Pros:
  • Looks and feels great in hand. It's a proper camera, made for manual control (even if the third-stop overrides are silly... but still).
  • The hybrid viewfinder is awesome. The electronic one is weak, but you won't really need it; I use it only to review the shots (you can set an auto-review period, and it switches back and forth fast enough).
  • The APS-C sensor is the best you can buy in a camera that still fits in a coat's pocket well (that means, it's slightly inferior to the best Nex, but better than anything else).
  • The 35mm f2 is great (albeit with its quirks), and does not extend or rotate during focus.
  • The flash is tasteful in its default setting.
Cons:
  • No optical image stabilization. I've hand-held it down to 1/15 easily though, due to the good grip and little vibrations.
  • The firmware looks and feels like it was made 10 years ago. Slow and ugly and stupid.
  • No flash exposure compensation button! Thank the lord, by default the flash does a proper fill.
  • The lens has some quirks, a bit soft when focused near, flares are weird. The lens cover is not built-in.
  • No charging through USB. And the charger is fairly bad too.
  • Bad exposure/dof preview, useless really.
  • Slow RAW mode. You'll need a fast SD card and some patience, not a camera to shoot action sports...
  • Not the fastest AF around. But trust me, it's fast enough and reliable enough even in the dark. I've seen worse (i.e. very fast lenses on reflex cameras, my 85mm 1.2 on the 5Dmk2 is fucking annoying; the X100 won't really bother you unless you're shooting something that moves really fast). The manual focus mode, though, is a joke.
  • Some design mistakes, e.g. the battery fits in the slot even if inserted the wrong way, the startup times can be incredibly long if you don't format the SD card from the camera, etc...
  • Silly expensive.
Really, the biggest issue with the X100 is its price. As a photographic tool, it works fine. But as a $1400 piece of equipment, its quirks are intolerable. Never, ever, never ever buy this one new. Luckily, people know it's a camera way too expensive for its issues, and it goes used easily for around $700, in great condition and with some accessories too. I bought mine that way, and I saw many similar offers. At that price, it's really, really compelling.

As for the one before, this shot from the X100 is an out-of-camera JPEG. And useless.

12 February, 2013

HDR Workflows

Presentation time! This one started during my last stay at EA; I never completed it before now, so I couldn't present it live. As always, it has loads of notes and references, and in its current state it's meant more as notes than a proper presentation.

Hopefully, the slide deck does a good job of explaining why, really, we need tonemapping (and -when- we need it), reminds everybody to help artists validate their work against proper references, and shows a (common) way to split brightness from color, an implementation of what Photoshop calls "vibrance", and some other little tricks.
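The brightness/color split and the vibrance trick boil down to something like this sketch (a rough per-pixel version; the exact luma weights and saturation estimate are illustrative, not the ones in the deck): boost saturation more where the pixel is less saturated already, pivoting around a luma estimate so brightness stays untouched.

```python
def vibrance(r, g, b, amount=0.5):
    mx, mn = max(r, g, b), min(r, g, b)
    sat = (mx - mn) / mx if mx > 0.0 else 0.0  # crude saturation estimate
    boost = 1.0 + amount * (1.0 - sat)         # desaturated pixels get more boost
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # brightness, kept fixed
    # Push each channel away from luma: color changes, brightness doesn't.
    return tuple(luma + (c - luma) * boost for c in (r, g, b))
```

Unlike a flat saturation knob, already-vivid pixels are left nearly alone, which is why artists can push it harder without skin tones going orange.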

As far as tonemapping operators go, it derives a few curves, one of which is fairly similar to (but worse than) Mike Day's recent work. Use his instead :)