Search this blog

17 March, 2012

Other tools that I use...

Most of the blog posts here are made for a selfish reason: to remind me of things I would quickly forget otherwise. It's really a personal diary more than anything else...

Over the years I've made a few posts which help me every time I start a new job or have to set up a new computer (like in the past month), and I even silently update them from time to time.

Certainly these fit in said category:
http://c0de517e.blogspot.ca/2011/04/2011-tools-that-i-use.html -- this is the only one I (try) to keep up to date!

Now I'd like (and it's probably going to be the last piece of this puzzle) to write down some of the remaining software I find important for my job.
This is mostly about the tools I use on my iPad, on my Samsung Galaxy (Android) and on the cloud...
  • Dropbox. Easy and really, really important to me. I use it both for photography and coding; it's available on Mac, PC, Android, iOS and the Web, so it covers everything. I particularly love the Android integration, which allows me to snap photos of notes and things I want to remember and upload them directly into my Dropbox account. Essential!
  • Wunderlist. Another cross-everything tool; it replaced the non-cloud tools I had on iOS.
    • I prefer it to Remember The Milk because even if the latter is probably more powerful, its free account is too restricted for me, and the paid one a bit too expensive.
    • Many people swear by Evernote. Maybe one day; today I like keeping most of my stuff on Dropbox...
  • ...speaking of which, PlainText on iOS is a neat text editor that syncs to a folder in your Dropbox account. There are a number of similar editors nowadays, some even with Markdown functionality, though I don't end up using it very often anyway.
    • For handwriting, Bamboo Paper. It's the most natural writing app I've found; I think it feels better than Penultimate, which is very good too. Paper is nice for making diagrams and sketches look pretty easily.
    • One day I'll buy a Jot Touch, as I like the Jot pens, but if you use an antiglare screen (as I used to; PowerSupport seems to be the best) it might scratch it...
  • Reeder for iOS was my choice for offline Google Reader reading, but now Google Reader is dead and the iPad version of Reeder seems dead too. My solution: MrReader and Feedly.
  • iAnnotate PDF. Best PDF reader for iOS that I've found
  • VLC (videolan) player for iOS. Has been pulled from the store over a petty argument about its licensing, but now it's back!
  • ReadItLater (now called Pocket): the iOS client, plus bookmarklets on my various browsers. I also use it as an offline reading tool; especially when I travel I upload all the travel guides I want (e.g. Wikitravel) and cache them on the iOS client...
  • ZooTool for bookmarks (which for me means bookmark sharing; I don't really care about the bookmark library myself, I just figured that since I do a lot of work reading feeds and other stuff on the web, I can create and share such a list for others to benefit from...). I'm shying away from this as I use Twitter more and more.
  • Just started trying Spreed, a cool web-based speed reader with a nice bookmarklet.
  • iTunesU/Coursera/and similar
  • http://www.rainymood.com/ — rarely, as I don't mind noise; I often find background chatter quite nice, and I listen to a lot of Italian news and politics at work :)
  • http://sleepyti.me/
Other stuff that I use that are not really work related:
  • iOS: Zinio (magazine newsstand), Fancy (cool stuff), SpyderGallery (as I have a Spyder color calibrator), Air Video (great for sharing videos from my iMac to the iPad), Photosmith (at a given point I wanted to write something exactly like it, to allow selection and rating of Lightroom photo collections, which is a tedious process), Daytum (personal logging)
  • Android: WhatsApp (messaging), Glympse (rarely used) and Poynt (rarely used...), HDR Camera (half-decent: the HDR merging is good but the alignment is really cheap and loses sharpness)
  • Cloud: Yelp, to decide where/what to eat; PreyProject.com, to protect my MacBook and my Android stuff; LogMeIn, for remoting
  • Physical world: I write on paper. Yes, I've tried iPads and styluses and gloves and palm rejection and everything. It's terrible, and thinking otherwise is just a delusion caused by the fact that we love our gadgets. It's many orders of magnitude worse. And if you need a digital copy, just take a picture of the page with a cellphone. Now, that said, which pen and which paper? That's an interesting question.
    • An A5-A6 spiral-bound notebook. It's important that it be spiral-bound with a hardcover: if you're writing while commuting and so on you need a hard surface, spiral-bound books allow only the page you're writing on to face you, and with the cover on the back they are the best in terms of stability. Field Notes, Whitelines or the classic Rhodia.
    • Pencil: I use a KuruToga, because it's cool, with Uniball NanoDia leads (fairly soft).
    • Pens. Too many; I collect fountain pens and buy too many other writing instruments in general, from brush pens to very fine ballpoints to graphite and brushes... For an everyday fountain pen, surely the best choice is a Lamy Safari. You want to couple it with a very smooth-flowing ink like Aurora black.
What iOS/Android/Cloud tools do you love? Suggestions?

DirectX9 vs depth resolve

GDC 2012 is over and I bet there are a lot of interesting new techniques to discuss and analyze. I haven't started this work yet (actually, I still have to catch up with a few other conferences first; from Siggraph Asia to Eurographics, I haven't been doing my homework much lately), but what I love when I read papers is not really the application of a given concept (which in scientific papers is often presented in such a biased way that it's hard even for experts to really understand the merits of a given implementation), but finding new ideas that can spark new applications.

One such paper was for sure the Variance Shadow Maps paper by Donnelly and Lauritzen, which taught statistically illiterate people like me (I know probability, but I'm really a novice when it comes to statistics) a couple of things about means and variances, and how they can be used in computer graphics.

After reading that paper, a few notions should stick in your mind. The average of a comparison versus the comparison of an average is one, which applies directly to occlusion; but more generally, that you can summarize a population of samples in a few statistics, which are nicely additive.
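To make the "average of a comparison versus comparison of an average" point concrete, here is a tiny Python sketch with made-up numbers (just an illustration, not anything from the paper):

```python
depths = [0.2, 0.2, 0.9, 0.9]  # hypothetical shadow-map depth samples at an edge
receiver = 0.5                  # hypothetical depth of the shadow receiver

# Average of the comparisons (correctly filtered occlusion): half the
# samples pass the depth test, so the receiver is 50% lit.
avg_of_cmp = sum(1.0 if d >= receiver else 0.0 for d in depths) / len(depths)

# Comparison of the average: a binary answer, fully lit here, never "half".
cmp_of_avg = 1.0 if sum(depths) / len(depths) >= receiver else 0.0

# The "nicely additive" part: raw moments of the population are plain
# averages, so they survive summing/averaging (i.e. hardware filtering),
# and variance can be reconstructed from them afterwards.
m1 = sum(depths) / len(depths)              # mean
m2 = sum(d * d for d in depths) / len(depths)  # mean of squares
variance = m2 - m1 * m1
```

The first two results differ (0.5 versus 1.0), which is exactly the artifact VSM sidesteps by filtering the moments instead of the comparison result.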

If that is the case, then you can apply what you learned in other contexts, with varying degrees of success. And this brings us to the lame technique of the month... As you might know, DirectX 9 is a bitch when it comes to depth buffer resolves. Never mind reading MSAA depth samples: even accessing depth information is pretty much hopeless, unless you want to do hardware PCF shadows. So you end up writing your depth to an R32F colour target, and as you don't have MSAA sample access even in that case, if you want to do some depth-aware effects like SSAO or soft particles, you're in a lot of pain.

Assuming you don't want to pay the ridiculous price of a depth prepass into an extra R32F rendertarget, only to then throw it away without being able to use it as early-depth priming for your next MSAA passes, your depth samples will be summed and averaged, while what you would really like is to compute a min-max buffer or something like that.

Something like that... We have the average... We want to know something about these averaged samples... Mhm... Variance to the rescue? Let's try... This is a small test I did in a few hours, so it's not going to be cool. The whole point of this post is not to show a cool implementation, but to stress that you should learn meta-techniques, not obsess over implementations...

So here it is. I had a grayscale scene (shading = depth, because I'm lazy) and on top of it I laid down two "depth fog" passes with a very, very sharp falloff (the green and red thingies). That's because a sharp function applied to the depth is what you want to stress the worst case: fetching the average depth at object boundaries instead of the correct one.

This is what it looks like accessing the MSAA R32F depth, averaged:

Notice the horrible edges...

This is how it looks computing mean and variance in a 16-bit ARGB buffer, and then offsetting the depth used in the fog computation by some function of the variance. This "emulates" a min-max buffer where we choose only one of the two endpoints, which works for many effects (i.e. preferring the foreground).
Better, even in the JPEG-compressed image... Even better: we can derive two depths from the variance, compute the fog with both endpoints, and average, as would be more correct even if we had a full min-max buffer...
Detail... Good enough :)
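For reference, the resolve trick above can be sketched in plain Python (the logistic fog falloff and all the numbers are made up; this is the idea, not my actual shader code):

```python
import math

def resolve_moments(samples):
    """Resolve MSAA depth samples into mean and mean-of-squares.
    Both are plain averages, so a standard (averaging) resolve can produce them."""
    n = len(samples)
    m1 = sum(s for s in samples) / n
    m2 = sum(s * s for s in samples) / n
    return m1, m2

def sharp_fog(depth, start=0.3, sharpness=50.0):
    # hypothetical very sharp depth-fog falloff (logistic stand-in)
    return 1.0 / (1.0 + math.exp(-sharpness * (depth - start)))

def fog_from_moments(m1, m2):
    # reconstruct the variance, derive two pseudo min/max endpoints,
    # evaluate the fog at both and average (instead of fog(mean))
    sigma = math.sqrt(max(m2 - m1 * m1, 0.0))
    return 0.5 * (sharp_fog(m1 - sigma) + sharp_fog(m1 + sigma))

# At an object boundary, half the samples hit the foreground, half the background:
samples = [0.1, 0.1, 0.9, 0.9]
m1, m2 = resolve_moments(samples)
naive = sharp_fog(m1)                              # fog of the averaged depth
better = fog_from_moments(m1, m2)                  # fog from the two endpoints
reference = sum(sharp_fog(s) for s in samples) / len(samples)  # per-sample fog
```

In this case the naive result is almost fully fogged (the averaged depth of 0.5 falls deep inside the fog), while the two reconstructed endpoints land back on 0.1 and 0.9, so the averaged result matches the per-sample reference of about 0.5.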

06 February, 2012

Leaving Relic

So, time has come again, and today was my last day at Relic Entertainment. As I did when I left EA, I want to write a bit about my experience with the Relicans and why I left (but not where I'm going, it's not a big secret but I always keep that from the public of this blog to avoid spam in the comments).
Relic and EA(C) are almost polar opposites. The latter, being the world's biggest videogame studio, shipping iterations every year or less (Fifa...), relies on a very refined (even if still surprisingly flexible) production methodology, while Relic still feels like an indie studio even after the acquisition by THQ. Relic works by distilling talent into games; it's really about people over processes.
One of the kitchens, with free pop. Donuts on Friday, I'm too fat...
And oh boy if they do have plenty of talent... It was truly a pleasure working among them!
By the way, I don't mean that the folks at EA are not bright, quite the opposite: it takes a lot of genius to make NHL, Fifa, Fight Night and all the other franchises there at the studio, but it's a rather different situation; it's a matter of priorities, I guess. Working at Relic feels less constrained, sometimes scarily so. It's a more hands-on approach where great hackers can make a bigger difference, and they do.
Space Marine
I should have started my job at Relic on a title which is still unannounced today, but as often happens, things change between the interview and your first day in the office, and I was immediately reassigned to Space Marine, where I stayed until it shipped. And with my help it became the third worst game (in terms of Metacritic) of Relic's history (actually, the worst game if we look at the PS3 Metacritic only)...
What happened? 

I wouldn't know, really. I'm a gamer and a technician. I enjoy games and do everything that is required to make them visually beautiful, and that's all. I've seen too many people who are not experienced in my own craft saying so much... inaccurate things (and I was among them ten-ish years ago) about it that I don't really love talking about fields which are not my own.
But... If I have to tell you my own perspective... Nothing really happened. It went the way it probably had to. It was the company's first big console title, its first third-person action shooter, its first multiplatform game. It started from zero and it was pitted against one of the biggest franchises ever (and I'm guessing here, but I'd bet... spending less money to make it, and much less to market it, for sure). Moreover, it was intended to be a Hollywood production, made by people who were really well known for their indie-art titles.

There were mistakes, sure. There was plenty of overtime... Could it really have ended much differently? With the same people, the same culture, the same resources and amount of support, I would say not.

An area where part of the SM team sat, which has not been reassigned yet
You can see Relic behind Space Marine... I'm not a fan of the genre, but I can tell that the core gameplay is incredibly well done (and remember, the company's first shooter!). I had fun with the multiplayer, and I don't play multiplayer! And the rendering is much more accomplished than it seems; its consistency really surprised me. The amount of features these geniuses managed to cram into the development cycle of something that basically started from scratch is amazing. What it lacks is the production: the depth and breadth.
The table-tennis room. No day went by without a 2vs2 match
Anyhow it doesn't really matter, and surely what I write on the argument does not matter. I'm sure that Relic will make new and really great games, the ones I saw in development are... exciting, really. 
New, great games will come.
Unfortunately, they are not the kind of projects that I personally need for my career, and there was nothing I or they could do about that, obviously. We just became not a match for each other, and it sucks, but it happens. Goodbye and best of luck to all Relicans, and be proud of Space Marine!
Bye...

03 February, 2012

Normalmaps everywhere/2

First followup to this: http://c0de517e.blogspot.com/2012/02/normalmaps-everywhere.html

Nothing smart really; it's just a small Mathematica playground I made to visualize and experiment with normalmaps, mipmaps and Phong. I'm posting it in case you want to play too!

http://www.scribd.com/doc/80425295/Normalmaps-mipmaps-Mathematica-playground

Just to be clear, this is something to have fun and experiment with. Of course, if all I wanted to do was to fit Phong (or any other rotation-invariant BRDF, or part of one), I could have done it better in many other ways (first of all, by considering a single parameter, the dot product between the normal and the reflection vector/half-vector, instead of working on the hemisphere). Hopefully I'll have something more "serious" coming out on the topic soon... eventually :)

01 February, 2012

Normalmaps everywhere

You are modeling a photorealistic face. Your artists want to be able to have detail, detail, detail. Skin pores. Sweat droplets. Thin wrinkles. What do you do?

If your answer is to give them high-res normalmaps and some cool, sharp specular, think twice. 

Normalmaps are not about thin, very high frequency detail! Especially NOT when rendered by our current antialiasing techniques.

Let's think about a high-fidelity model, for example: millions of triangles. It's too expensive to render in realtime, but we can imagine it looking nice. It will have aliasing issues in regions of high-frequency detail, like the lashes of a human face, so it requires many samples to avoid that artifact. We bake normalmaps on a low-poly base mesh, and all our problems disappear. Right? Now it's fast and it doesn't shimmer. Cool. How? How come that aliasing magically disappeared? Isn't it a bit fishy?

Normalmaps: a few pixels wide detail
Normalmaps suffer from two main issues.

The first one is that they are not really that good at capturing surface variations. Imagine some thin steps, or square ridges on a flat surface. The normals in tangent space will all be "up", but hey, we had some dramatic discontinuities (the steps): where are they? Of course, you lost them in the resolution of your normalmap. Now, steps are not representable no matter what, obviously (discontinuity = infinite frequency = no resolution will be enough), but you might think: ok, I create a high-resolution normalmap, add a bit of bevel around these edges so the normalmap captures them, et voilà, problem solved. Right?

This introduces us to the second problem, which puts the nail in the coffin. Normalmaps don't antialias well. They are textures, so on the GPU they are antialiased by some sort of pre-filtering (i.e. mipmaps). We will always get a single normal per sample (and MSAA does not give you more shading samples, only depth/geometry ones), so in the end (unless we do crazy stuff in the shader that I've never seen done, but maybe it would be a cool idea to try...) it takes little zooming out for our normals to disappear and become flat again, due to the averaging done by filtering. It takes actually _really_ little.
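A two-texel toy example (plain Python, not shader code) shows how fast the averaging flattens things:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

# Two tangent-space normals of a steep V-groove, tilted +/-45 degrees:
n0 = normalize((-1.0, 0.0, 1.0))
n1 = normalize((1.0, 0.0, 1.0))

# Mipmap generation averages texels component-wise...
avg = tuple((a + b) / 2.0 for a, b in zip(n0, n1))
length = math.sqrt(sum(c * c for c in avg))  # ~0.707: shorter than unit length!

# ...and after renormalization the groove is gone: a flat "up" normal.
flat = normalize(avg)
```

Note that the shortened length of the averaged normal is itself a measure of how much the underlying normals disagreed, a fact that becomes useful later in the post.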

You might want to think about this a bit, and I could make lots of examples, but in the end normalmaps won't cut it: by their nature of being a derivative surface measure they take quite a bit of space to encode details, and they can reproduce them faithfully only if there is no minification. So if a feature takes five texels in a normalmap, you might get good results if on screen that feature still takes five pixels; when you start zooming out, it will fade to flat normals again.

Unfortunately, sometimes artists fail to recognize that, and they "cheat" themselves into thinking that certain features are wider than they are, because if they model them realistically they won't show. Thus, we get faces with huge pores and thick lashes or facial hair.
Or models which are covered with detail in the three-to-five pixel frequency range but do not exhibit any finer detail geometry-wise (other than the geometry edges), creating a very weird look where the frequencies do not match between colourmaps, normalmaps and geometry; almost like a badly cast plastic toy whose geometry could not hold the detail well.
It also doesn't help that this issue is largely not present in offline rendering, where proper supersampling is able to resolve subpixel detail correctly from normalmaps, thus there is some mismatch in the experience between the same technique used with different rendering algorithms.

A normalmapping example... from google images (sorry)
So what? 
How can we solve this? By now some ideas should be circulating in your head. Prefiltering versus postfiltering (supersampling). Averaging some quantities and then performing some operations (lighting) versus performing the operations on the individual quantities and then averaging? These problems should be familiar; one day I'll write something about it, but always remember that a given rendering technique is not so much useful per se as it is as part of a "generic" idea that we can apply... For example, a similar-sounding problem could be shadow filtering... and VSM...

But let's step back again a second. The problem with mipmapping basically means that far enough away, normalmaps will always suck, right? And what we really want is a means to capture the average of the illumination that all these tiny normals we can't express in a given miplevel would have created. Some sort of "baking"...

Mhm, but this averaging of illumination... Isn't that exactly what a material, a shader, a BRDF do? You know, the whole "there is no diffuse material": really it's just that if you zoom close enough, a lot of tiny fragments in the material are all rough and oriented in different directions, and thus scatter light everywhere, in a "diffuse" way... We ignore these geometrical details because they are way too small to capture, and we created a statistical model around them! Sounds promising. What could we use?

Well, lots of things, but two of them are _really common_ and underused, and you have them already (most probably) so why not...

Occlusion maps: pixel-sized detail
One great way to encode geometric detail, maybe surprisingly, is to use occlusion. The sharper, the smaller a depression is, the more occluded it will be.

And occlusion is really easy to encode. And it mipmaps "right", because it's something that we apply _after_ lighting; it's not a measure we feed into the lighting computation. Thus the average in a mipmap still makes sense, as we don't need to transform it by a function that we can't distribute over the mipmap averaging...
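The reason occlusion mipmaps "just work" is plain linearity: it scales the lit result, and scaling by a constant distributes over averaging, while a nonlinear function of an averaged quantity does not. A toy Python check with made-up numbers:

```python
# Occlusion multiplies the lit result, so (assuming lighting roughly constant
# over the mip footprint) filtering the occlusion first is exact:
occlusion = [1.0, 0.2, 0.8, 0.6]  # hypothetical AO texels in one mip footprint
lit = 0.7                          # lighting over that footprint

avg_then_apply = (sum(occlusion) / len(occlusion)) * lit
apply_then_avg = sum(o * lit for o in occlusion) / len(occlusion)
# the two agree (up to float rounding): averaging commutes with scaling

# A nonlinear function of the averaged quantity does NOT commute this way:
def spec(x):
    return x ** 32  # stand-in for a specular-like nonlinearity

avg_then_spec = spec(sum(occlusion) / len(occlusion))  # tiny, near zero
spec_then_avg = sum(spec(o) for o in occlusion) / len(occlusion)  # ~0.25
# these differ wildly, which is the normalmap/lighting problem in a nutshell
```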

Digital Emily demonstrating again specular occlusion

Also, chances are that a lot of your detail will come from the specular, as the specular is more high-frequency than the diffuse. And chances are that you already have a specular map around... Profit!

Ok, but what if we have geometric details which cause sharp lighting discontinuities but are not occlusion-related? For example, round rivets on a metal tank's body. Or transparent sweat beads on a man's forehead. What could we do then?

Exponent maps: specular detail to the subpixel
The two examples I just made are convenient. Round, highly specular shapes. What does that do, lighting wise?

Photographers are known to use the reflection of the lights in the eyes to help them reverse engineer the lighting in a photo. These round shapes will do something similar: capture the light from any direction.

And as they are highly specular, they will create a strong reflection if there is any strong light anywhere in their field of view, i.e. anywhere in the hemisphere. This strong reflection, even if it's small, will still be significantly visible even if we look at the feature from far enough away that our samples can't resolve the spherical shape itself.

So that is interesting. In a way, this surface which has a high Phong exponent and creates sharp highlights, when seen from far away, acts as a surface with a much lower exponent: it shines if a light is "in view" from the surface, and not only at very narrow viewing angles. Is this surprising, though? Really not, right? We already know that in reality our materials are made of purely specular, tiny fragments which average together to create what we model with Phong exponents or whatever other equation.

Sweat on a face though is usually better modeled, for the typical viewing distances found in videogames, as streaks of lower exponents in the material "gloss" map, instead of using normalmaps. This also completes the answer to the original question, at the beginning of the post...

Conclusion
So that's it, right? Tricks for artists to express detail, done.

Well, sort of. Because what we said so far kinda works only at a given scale (the occlusion doesn't have this problem). But what if I need to express a material at all scales? What if I want to zoom close enough to see the sweat droplets, and then pan backwards? I would need a model where up close the sweat layer uses normalmaps and a high specular exponent, but far away the normals get "encoded" in the BRDF by lowering the gloss exponent instead...

Well. Yes. We can do exactly that! Luckily, we have mipmapping which lets us encode different properties of a material at different scales. We don't have to straight average from the top miplevel in order to generate mipmaps... We can get creative!

And indeed, we are getting creative. LEAN and CLEAN, and more to the point, CLEAN translated into exponent maps. Or transitioning from one model to an entirely different one, for oceans or trees...


Now, if you were following, you should also understand why I mentioned VSM and shadows... One idea could be to extract variance from the hardware filtering operation (a general idea which is _always_ cool; I'll show that in another post... soon) and use that variance somehow to understand how big the normal bundle we have to express is.


While I was chatting about all this with a coworker, he brought to my attention the work that COD: Black Ops did (I guess I was distracted during Siggraph...), which is exactly along these lines (kudos to them, I love COD rendering)... They directly use this variance to modify the gloss amount, and bake that into the mipmaps, so we don't have to compute anything at runtime: during mipmap generation we take the variance of the normals in a given mip texel and use it to modify the gloss... Simple implementation, great idea!
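The variance-to-gloss baking can be sketched like this (a Toksvig-style adjustment as I understand it, not the actual Black Ops code; the numbers are made up):

```python
def toksvig_gloss(avg_normal_len, spec_power):
    """Toksvig-style exponent adjustment: the shorter the averaged
    (unnormalized) normal in a mip texel, the higher the variance of the
    normals it stands for, and the lower the effective specular power."""
    ft = avg_normal_len / (avg_normal_len + spec_power * (1.0 - avg_normal_len))
    return ft * spec_power

# A mip texel whose averaged normal kept unit length keeps its gloss;
# one covering divergent normals (length ~0.707, e.g. a 90-degree groove)
# becomes much rougher.
flat_texel = toksvig_gloss(1.0, 100.0)     # unchanged: 100.0
bumpy_texel = toksvig_gloss(0.7071, 100.0)  # drops to roughly 2.4
```

The nice part is that the averaged normal length falls out of mipmap generation for free, so the bake is just one extra per-texel formula.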

But it's also a matter of educating artists, and I was really happy to see that one of our artists here proposed to drop normalmaps, now that our game is rendering from a more open viewpoint, because they were not able to provide good detail (either too noisy with too little filtering, or too flat), and resorted to specular maps instead.

You might even think that you could take all your maps, and a BRDF, and compute maps and mipmaps for the same BRDF or another BRDF re-fitted to best match the lighting from the original at a given scale... And this is really what I was working on... but this article is already too big and my Mathematica example is too stupid, so I'll probably post that in a follow-up article. Stay tuned...

Sneak peek
P.S. From Twitter, people reminded me of these other references, which I admit I knew about but haven't read yet (shame on me!):
http://www.cs.columbia.edu/cg/normalmap/index.html
http://developer.nvidia.com/content/mipmapping-normal-maps
http://blog.selfshadow.com/2011/07/22/specular-showdown/
Also an addendum, when thinking about aliasing: what about specular aliasing due to geometric detail? I actually started looking into that before thinking about this. Differential operators in the shaders could be used to evaluate curvature, for example, but really I found it hard to get problems from the geometric normals even for the dense models I was using (you really need very high exponents), so I dropped that and went to analyze normalmaps instead.