
28 January, 2012

Interview mistakes

Interviewing candidates is undoubtedly a tricky and important task for a software company seeking new talent. Books have been written on the topic, and while I can't call myself an expert, I have been through the process enough times, on both ends, to have at least a solid opinion of it.

There is a lot to talk about, and things differ depending on whether you're looking for a junior or a senior. Also, a lot has been said already by people who are more competent than I am, so I would like to focus here on my personal experience as an applicant to senior/lead rendering roles, and on the mistakes I see most often in the process. It's going to have some rant-ish undertones :)

In my opinion, there are two main problems to take care of:
 
Don't waste time.
Who would you rather employ? Someone who's working, has lots of competing offers, and little free time, or someone who thinks spending a day on your coding test is the most exciting thing that happened to him that week? You want to hire the busy people.
If your applicant has three companies interested in interviewing him (and even in this economy, you know well that for seniors it will happen), which one do you think he will try first? The one that gets him a technical interview the day after, or the one that asks for a long pre-screening test?

Of course your engineers are busy too, and pre-screening is effective at reducing wasted time, but be reasonable. And differentiate between juniors or new graduates, who will probably have more time on their hands and fail more often, and seniors with years of experience. A one-hour test is okay, a day-long one not so much; more than that and you start risking being ignored or considered last.

This doesn't apply only to pre-screening though, it applies to everything really. Get to the point. Yes, warm-up questions might be useful if your applicant seems nervous, but there is really no need to start with things that won't get you any useful information. If you want to chat, start with more colloquial questions instead of going straight to equations or code, and bring those to the table after the looser conversation on a given topic.

A lot of the time waste, though, comes from not creating different interview paths for the different engineering positions, which also brings me to the next issue:

Show some effort.
Who would you rather employ? Someone who is in love with your company, or someone who wants proof that you're worth his time? An interview is usually the first point of contact between the applicant and your company, and you should assume that he is analyzing you the whole time; he's evaluating you at least as much as you are evaluating him.

So far, I have never applied to a company I was absolutely persuaded I wanted to work for at the time I applied. There are some games that I really would have loved to make, but I find it impossible to tell from a company's products what working for it will be like, and I suspect a lot of games I love are not made in environments I would have loved to be part of.
There are certain people who fascinate me and whom I would like to work with, but still, every time I interview, I'm interviewing the company at the same time.

Put in some effort, come prepared, don't waste my time. I have been asked questions that made me seriously doubt the kind of job they were interviewing me for (like the time my whiteboard math question was about computing the distance between two points...), I've seen questionnaires with errors in them, and I have been asked the same "tricky" algorithms over and over again. Nowadays, if someone asks me about polymorphism in C++ and the "virtual" keyword, it's an immediate red flag.

Virtual functions? Distance between a point and a line? Dot product? But, I hear you say, you would be surprised to know how many technical directors can't code. No, I wouldn't, but are you seriously going to think "oh, finally, a lead who knows virtual, let's hire him"?
What are you asking? Is it even relevant, or are you just wasting time? Seriously, can't you think of a simple function to write, or a problem to solve, that tests all of these things and more?

Be original! Reversing the words in a string? The building and two eggs "Google" problem? Finding the missing number in a sequence? Do your homework: are you testing whether your candidate has studied the list of the ten most common interview questions, or whether he has a working brain?

Even simple variants will do better. The distance between a line and a circle instead of a point: same thing, but it avoids the mechanical "go to my dictionary of ready-made answers I studied the night before".
 
There are plenty of simple problems you can invent which can expand in all sorts of interesting directions. Craft a few, unique to your company, for each engineering level, and maybe change them every few years.

P.S. So, are there companies that did it "right"? My best experience so far was with Sony Santa Monica: original, effective, required little time, and showed effort and respect, which is not surprising knowing how smart Christer Ericson is... Crytek was also good, I have to say; I rate it second because the process required way too much time, but talking with Ury Zhilinsky was awesome. The worst ones? They probably left me so unimpressed that I don't even remember the companies.

19 January, 2012

Replacing google share: zootool

There are quite a few bookmarking services on the web today, and I never felt the need for one until Google shut down its handy Reader share feature. So, kind of randomly, I picked Zootool, and I've been sharing some links on it for a few weeks; now there is a bit of content and I can publish its feed. Let me know if there are problems, otherwise I'll stick to this one to share my finds in the "rendering et al." category, at least until Reeder adds support for Google+ :)

18 January, 2012

Prototyping frameworks (rendering)

Every now and then, I look around for prototyping frameworks for my rendering work. I always end up with very little, but maybe I'm too picky or too lazy. Here are some I've found:

Stuff I actually use:
  • FXComposer, both 2.5 and 1.8 (Nvidia does not host it anymore, so you have to google it), as they both crash/have problems in different ways. In particular, 2.5 seems to have problems clearing render targets (I use a pass with ztest always to clear), while 1.8 crashes in some situations. 1.8 is also nicer for a programmer, but 2.5 is fairly usable. I use SAS, for which NVidia (finally!) has some documentation, to script render passes. In theory both also support proper scripting, but the documentation is thin. A few times, when I wanted to look inside FXC 2.5, I used something like ILSpy or .NET Reflector to delve into the undocumented parts (that's to say, almost everything).
  • Wolfram's Mathematica. I wrote a couple of articles on this blog about it; it's great and I love it. I love the language: it's not what you would expect if you're a mathematician, but for a programmer it's pretty neat (well, at least if you like lisp-ish things, which you should, syntax apart).
  • Python/IPython (I like the Anaconda distribution) is a good alternative to Mathematica. I still use Mathematica most of the time and I'm not a Python expert, but I've done a few experiments with it.
  • SlimDX or SharpDX; to tell you the truth, I've mixed them up a few times, the names are similar. Bottom line, a DX wrapper for C#, and I love C#. SharpDevelop if I don't have an updated Visual Studio that supports the latest .NET framework.
  • Processing. I wrote an article on the blog about using it with Eclipse for live-coding. It's neat, it's simple, and it has a ton of libraries at this point, even for 3D stuff and shaders, but I use it mostly for 2D prototypes.
  • ShaderToy. There are a ton of offline programs that offer similar functionality, even on iPad, wherever; it's very popular. But having it online is nifty for some quick tests. Unfortunately it crashes often on certain browsers/computers (the most common issue is large shaders taking too long to compile, making WebGL think something's wrong). There are also lots of alternatives (kickJS editor, Shdr, GLSL playground and SelfShadow's playground), some more powerful (WebGL playground, which also supports three.js), but ShaderToy is the most popular.
  • 3D Studio Max. It has horrible support for shaders (at least it used to, and I suspect not much has changed since) and I never loved it (I love Maya even less, though), but I used to know it (six years ago or so) and know maxscript, so I ended up prototyping a few things in Max. It can be handy because you can obviously manipulate meshes any way you want, define vertex streams and visually paint attributes on meshes. You can't really control the rendering passes though, so doing non-surface shaders or anything other than the most basic post-effects is hard. Nowadays I don't use it much, if at all.
  • Pettineo's framework. Comes with all his sample projects and it's a great, simple, well written C++/DX11 framework, very easy to toy with. I have my own fork with some improvements.
  • Jorge Jimenez's demo framework - as Jorge is a coworker of mine, I have access to his latest version.
Seem promising:
  • PhyreEngine, if you have access to the Sony stuff... Might be a bit overkill as it's a fully fledged engine, but the learning curve is not too steep per se, and there are tons of examples.
  • Microsoft/DirectX MiniEngine. Quite nice! Also, NVidia made Falcor, a "research" framework, but currently it's OpenGL only, which is fairly sad (even if understandable, as lots of HW extensions come out for OGL first...).
  • Bart Wronski's C# framework. A solid alternative to MJP's, with the added bonus of being C# code.
  • Karadzic's BGFX wraps Dx9, 11, 12, OpenGL, GL|ES and Vulkan! It's a bit higher-level than any of these APIs, providing a draw-centric model where draws are sorted on a per-draw key. Neat, even if I don't necessarily care much about being cross-platform while prototyping.
  • ReedBeta's DX11 framework.
  • Threejs.
Some other alternatives:
  • Erik Faye-Lund published the sources of his "very last engine ever", which is used in a bunch of great demos (as in demoscene). I didn't have time to look into it much yet, but the name sounds great!
  • Hieroglyph 3, the 3D engine that "ships" with the Practical Rendering and Computation with DirectX11 book (which is nice). It is a bit more than I'd like (more of an engine than a framework), but it's nice.
  • Matt Fisher's BaseCode could be handy for some kinds of experiments.
  • Cinder still looks a bit young; it has many nice things but lacks some others which I would consider "basics". I feel the same about openFrameworks, and to me Cinder looks nicer. Plus I don't love C++ that much, and Cinder depends on Boost, which is a huge turn-off :)
  • Humus Framework 3. This is great: it's simpler than a fully fledged engine, it's easy to read, and it has tons of examples. Humus is well known for his graphics demos, which all come with source code and were made with his framework!
  • Intel's Nulstein.
  • VVVV. It's a node-based graphics thingie, which would seem like the least suitable thing for rendering prototypes, but it supports shaders, and it supports "code" nodes where you can write C#, so it might be worth a try...
  • OpenCL Studio. I used to use this for experimentation, but it seems abandoned, sadly.

17 January, 2012

Videogame photography

For some reason, videogames and photography are more linked than people might suspect. In my career I've noticed that many, if not most, of the top engineers I know also dabble quite a bit in photography (well, I'm biased, as I'm a rendering guy and my friends tend to be in the rendering circle). That includes myself (NSFW). But there is more...


Some notable videogame photographers are Robert Overweg, Duncan Harris (Dead End Thrills), Francois Soulignac and Iain Andrews, but I'm sure there are way more out there; if you have links to share, please comment on this post.

Robert's work focuses on exploiting the limits and bugs of games to create artistic fine art series (which is not uncommon either), while Dead End Thrills and Iain Andrews are more "conventional" screenshot artists: games get hacked to gain more control or improve quality, and the end results are quite amazing!


Professional photographers also dabble in the medium from time to time. Of course, the main attraction still seems to be the players, and not the games, as in these series by Phillip Toledano and Robbie Cooper (also here), but there are examples of photographers leaving their cameras for the PC, for instance to record this Skyrim timelapse (was this inspired by a similar video on Eurogamer, or vice versa?), or using videogame equipment in their photography.

 
Videogame references in photography are of course countless, like this fashion editorial in Neon Magazine.


Lastly, even gamers are becoming photographers, not only in games that specifically feature this profession in their characters (the Fatal Frame series and Beyond Good and Evil come to mind, but I'm sure there are more), but by twisting game rules.
Virtual war photographers enter multiplayer shooter games with the intent not to kill but to report (and not die), like Brunet Thibault (who also enjoys other forms of videogame photography). There are starting to be quite a few examples of this, which is probably going to piss off the other players, though. It would be great if modern FPSes started to support this option (spectator modes are sort-of there, but you can't die and thus don't really "simulate" the role; there is at least one game that is all about war photography, but it doesn't seem to allow other roles, so it's still not quite the same).


A notable game that supports photography (and fashion!) without being centered on photography is Second Life. These guys go so crazy that they actually photoshop their work! Eve Online is another game worth mentioning: not only is its avatar creator amazing, but taking your avatar's picture puts you in a (simple) virtual photography studio, and the results are fairly decent.


If only Red Dead Redemption or Alan Wake could get more love... Unfortunately, being console-only titles (I still hope for an Alan Wake on the PC), they lend themselves less to photography, but at least for the former there are a couple of amazing videos out there, including a short film made by John Hillcoat...

04 January, 2012

Current-gen DOF and MB

I was thinking of publishing this somewhere else, but Crytek already disclosed a technique close to this one at the last Siggraph, so in the end I guess it's not worth it and I'm going to spill the beans here instead.

Depth of field and motion blur, from a post-processing standpoint, look similar. They are both scattering filters, and they both need to respect depth. So it makes sense to try to combine the two effects in the same filter. And if you think about doing it that way, you're almost already done: it's quite obvious that it's possible to combine the DOF kernel with the MB one by skewing the DOF towards the motion-blur direction.


This is what Crytek does: they use a number of taps, their filter is circular, and they just transform this circle with a basis that has the motion-blur axis as one vector and a perpendicular one, scaled by the DOF amount, as the second. It's pretty straightforward; the only thing we really have to take care of in this process is regions where the motion blur is zero and thus we won't have the required first axis, which might be a bit of a pain.
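
Just to make that concrete, here is a minimal sketch of that kind of skewed-disc gather; the helpers and constants (GetMotionVector, POISSON_DISC, NUM_TAPS) are made-up names of mine, not Crytek's actual code:

// --- Sketch: circular taps transformed by the MB/DOF basis (hypothetical names)
float2 MB_axis = GetMotionVector(TexCoord); // screen-space motion of this pixel
float DOF_amount = ComputeDOFAmount(tex2D(DepthSampler, TexCoord)); // circle of confusion radius
// Perpendicular to the motion, scaled by the DOF amount (degenerates when MB_axis is ~zero, as noted above)
float2 DOF_axis = normalize(MB_axis).yx * float2(1, -1) * DOF_amount;
half4 sum = 0;
for (int i = 0; i < NUM_TAPS; i++)
{
    float2 tap = POISSON_DISC[i]; // precomputed unit-disc tap
    // Transform the unit disc by the (MB_axis, DOF_axis) basis and gather
    sum += tex2D(ColorSampler, TexCoord + tap.x * MB_axis + tap.y * DOF_axis);
}
return sum / NUM_TAPS;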

As you know, doing these effects in post is an approximation, so when I look at these problems I always think of the "correct" solution first and use that as a background in my head to validate any idea I have.

In this case, what we really want to do is to sample some extra dimensions in our rendering integral: temporal in the case of motion blur, and spatial, on the camera's film plane, in the case of DOF. This leads quite naturally to raytracing, but thinking this way is not particularly useful, as we know we can't achieve results comparable to that: in post we don't have the ability to sample more visibility (and shading) than what we already have in our z-buffer. So we should look instead for a model that gives us "the best" of what we can do with the information that we have (to be fair, there is in the literature a method which subdivides the scene into ranges with multiple z-buffers and color buffers, and then ray-marches on these z-buffers to emulate raytracing, which I find pretty cool, but I digress even more). With only a z-buffer and its color buffer, that would be scattering.
You can imagine that as placing a particle at each image point, stretched along the axes we just described, with an opacity that is inversely proportional to its size, and then z-sorting and rendering all these particles (some PC ports of some games did just that, having a lot of GPU power to spare; in general, placing some particles is not a bad idea, especially if you can avoid having one per pixel by selecting the areas where your DOF highlights would be most visible). This, plus a model to "guess" the missing visibility (remember, we don't have all the information! If an object is moving, for example, it will "show" some of the background behind it, which we don't have, so we need a policy to resolve these cases), is the best we can do in post and should guide all the decisions we make in more approximate models.

Ok, going back to the effect we are creating: so far it's clearly a gathering effect. We create a filter kernel at each pixel, and then we gather samples around it. Often this gathering only respects the depth sorting: we don't gather samples if their depth is farther from the camera than the pixel we're considering. This avoids the missing-visibility problem, as we assume that the surface at the pixel we're considering fully occludes everything behind it, but it's not really "correct" if we think about the scattering model we explained above.
A better strategy would gather a sample only if the scattering kernel of the surface at that sample point would have crossed our pixel; this strategy is what I sometimes call "scattering as gathering". The problem with this is that unless our scattering kernels have a bounded size, and for each pixel we sample everything inside that size to see if there is something that could have scattered towards it, we will miss some scattering; and unfortunately doing so requires a large number of samples.
In particular, as we size our gather kernel using the information at the surface point of the pixel we're considering, we easily miss effects where a large out-of-focus object scatters on top of some in-focus background pixels, for which the gathering kernel would be really small.
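
In code, the difference is just in how each tap is weighted; here is a minimal sketch of the idea (KernelRadiusAt is a made-up helper returning the blur radius, in the same units as the UVs, of the surface at a given location):

// --- Sketch of the "scattering as gathering" weight (hypothetical names)
float ScatterAsGatherWeight(float2 pixelUV, float2 tapUV)
{
    float tapDist = length(tapUV - pixelUV); // how far the tap is from the pixel we're shading
    float tapRadius = KernelRadiusAt(tapUV); // how far the surface AT the tap would scatter
    // Non-zero only if the tap's own kernel is large enough to reach our pixel
    return saturate(tapRadius - tapDist);
}

This is essentially the same weight you'll find in the separable passes of the code at the end of the post, as saturate(maxLength - currentLength).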

Now there are a variety of solutions to that. We could, for example, fix a minimum size for our gathering kernel, so that in the in-focus areas we still "check" around a bit, and we might end up gathering nothing if we find nothing that scatters towards us. This sort-of works, but still won't handle large scattering radii. We might think to cheat and get more samples by separating our gathering filter into two passes, each filtering along a line (axis).
This works well for gathering; even when the filter is not really separable (DOF and MB are not, as the kernels, even if we use a gaussian or square one, which are separable, are sized by a function that varies per pixel and is not itself separable), the artifacts are not noticeable. But if we push this with a scattering-as-gathering logic it starts to crumble, as the second pass does not know where the first-pass gathering took its samples from, so it can't decide whether those samples would have scattered towards a given location, and it can't even separate the samples anymore. Digression: in the past I've solved this by doing DOF in two passes, using separable gathering in a first pass while detecting the areas where it fails and masking them using early-z, to then do a second pass on them with a large gather-as-scatter filter.

So what can we do? The solution is easy, really: we can write our basis vectors to a buffer (which is required anyway if you want to consider MB due to object movement and not only camera movement, as the former can't be computed from the colour and depth information alone, we need to render the motion of the objects somewhere) and then apply a fixed-radius gathering-as-scattering filter there. As this pass is only searching to expand the subsequent filtering radii and not sampling colour, it can be done with fewer samples without causing too many artifacts, pretty much as "percentage-closer soft shadows" do. Something like the sketch below.
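
The example code at the end doesn't include this pass, so here is only a minimal sketch of what I mean (AxisSampler, AXIS_TEXEL_SIZE and AXIS_SEARCH_RADIUS are made-up names; a real implementation would want a smarter and cheaper search, e.g. on a downsampled buffer):

// --- Sketch: fixed-radius gather-as-scatter "dilation" of the blur axes (hypothetical names)
float4 axes = tex2D(AxisSampler, TexCoord); // .xy = first axis, .zw = second axis, in pixels
for (int y = -AXIS_SEARCH_RADIUS; y <= AXIS_SEARCH_RADIUS; y++)
{
    for (int x = -AXIS_SEARCH_RADIUS; x <= AXIS_SEARCH_RADIUS; x++)
    {
        float2 offsPixels = float2(x, y);
        float4 neighbour = tex2D(AxisSampler, TexCoord + offsPixels * AXIS_TEXEL_SIZE);
        // Keep a neighbour's (bigger) axes only if its blur is large enough to reach our pixel
        float neighbourExtent = length(neighbour.xy) + length(neighbour.zw);
        if (neighbourExtent > length(offsPixels) && dot(neighbour, neighbour) > dot(axes, axes))
            axes = neighbour;
    }
}
return axes;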

So far, Crytek does something along the very same lines. The twist where the effect I crafted diverges from theirs is that I still employ a separable filter to compute the final blur, instead of using some taps in a circle. The first time I saw this done for MB and DOF was in EA's first Skate game, so it's nothing particularly novel. Skate's implementation, though, was "driven" by DOF: the motion blur was only added for the camera, and it was only present if the DOF was there too (at least, afaik).
Extending this to behave well with the two effects separately requires computing the "right" gathering weights, or, as I wrote above, reasoning about the scattering. Also, once you get the ability to do motion blur without DOF, you will notice that one of the two blur passes will do nothing in areas of pure MB, as the second axis will have zero length (but you are still "paying" to sample N times the same area...). To avoid that waste, I filter along two diagonals: in the case of pure MB these coincide, but they are both non-zero, so we get a bit better filtering for the same price.
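
A quick sanity check of the diagonal trick, using the names from the code below:

// axis1 = DOF_axis + MB_axis, axis2 = DOF_axis - MB_axis (DOF_axis is perpendicular to MB_axis)
// Pure MB (DOF_amount = 0):  axis1 = MB_axis, axis2 = -MB_axis -> both passes blur along the motion
// Pure DOF (MB_amount ~ 0):  the fallback kicks in, axis1 = (d, d), axis2 = (d, -d) with d = DOF_amount
//                            -> the two diagonals of the circle of confusion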

I don't think you can do much better than this on current-gen (without proper support for scattering), but I'd love to be proved wrong :) Example code below (it doesn't include many details and it doesn't include the axis scattering pass, but the "important" parts are all there):

// --- First pass: Compute camera motion blur and DOF base axes
float depth = tex2D( DepthSampler, TexCoord );
// Motion axis from this pixel's current and previous view positions (camera motion only here)
float2 MB_axis = ( currentViewPos - previousViewPos ) * MB_MULTIPLIER;
float DOF_amount = ComputeDOFAmount(depth);
float MB_amount = length(MB_axis);
float DOF_MB_ratio = DOF_amount / (MB_amount + EPS_FLOAT);
// DOF axis: perpendicular to the motion axis, rescaled so its length is the DOF amount
float2 DOF_axis = MB_axis.yx * float2(1,-1) * DOF_MB_ratio;
// Rescale the motion axis so its length is max(MB_amount, DOF_amount)
MB_axis *= max(DOF_MB_ratio, 1.f);
// Compute the 2x2 basis: the two diagonals DOF_axis +/- MB_axis
float4 axis1xy_axis2xy = DOF_axis.xyxy + MB_axis.xyxy * float4(1.0.xx, -1.0.xx);
// Make sure that we are in the positive x hemisphere so the following lerp won't be too bad
//axis1xy_axis2xy.xy *= sign(axis1xy_axis2xy.x);
//axis1xy_axis2xy.zw *= sign(axis1xy_axis2xy.z);
// We have to take care of too small MB which won't be able to correctly generate a basis:
// fall back to the two fixed diagonals of the DOF disc
float MB_tooSmall = 1.0 - saturate(MB_amount * MB_THRESHOLD - MB_THRESHOLD);
axis1xy_axis2xy = lerp(axis1xy_axis2xy, float4(DOF_amount.xxx, -DOF_amount), MB_tooSmall);


// --- Second and third passes: separable gathering
half2 offset;
if(is_first_separable_blur) offset = GetFirstAxis(TexCoord);
else offset = GetSecondAxis(TexCoord);
half amount = length(offset);
half4 sum = tex2D(ColorSampler, TexCoord) * FILTER_KERNEL_WEIGHTS[0];
half sampleCount = FILTER_KERNEL_WEIGHTS[0];
half4 steps = (offset * TEXEL_SIZE).xyxy;
steps.zw *= -1; // walk the axis in both directions
for(int i=1; i < 1+NUM_STEPS; i++)
{
    half4 sampleUV = TexCoord.xyxy + steps * i;
    // Color samples
    half4 sample0 = tex2D(ColorSamplerHR, sampleUV.xy);
    half4 sample1 = tex2D(ColorSamplerHR, sampleUV.zw);
    // Maximum extent of the blur at these samples
    half maxLengthAt0;
    half maxLengthAt1;
    if(is_first_separable_blur)
    {
        maxLengthAt0 = length(GetFirstAxis(sampleUV.xy)) * (NUM_STEPS+1);
        maxLengthAt1 = length(GetFirstAxis(sampleUV.zw)) * (NUM_STEPS+1);
    }
    else
    {
        maxLengthAt0 = length(GetSecondAxis(sampleUV.xy)) * (NUM_STEPS+1);
        maxLengthAt1 = length(GetSecondAxis(sampleUV.zw)) * (NUM_STEPS+1);
    }
    // Scattering as gathering: weight each sample by how far its own blur reaches
    half currentLength = amount * i;
    half weight0 = saturate(maxLengthAt0 - currentLength) * FILTER_KERNEL_WEIGHTS[i];
    sum += sample0 * weight0;
    sampleCount += weight0;
    half weight1 = saturate(maxLengthAt1 - currentLength) * FILTER_KERNEL_WEIGHTS[i];
    sum += sample1 * weight1;
    sampleCount += weight1;
}
return sum / sampleCount;

This is golden...

This video, by Bill Dally (NVidia), about chip architectures is so great it deserves its own post: http://mediasite.colostate.edu/Mediasite/SilverlightPlayer/Default.aspx?peid=22c9d4e9c8cf474a8f887157581c458a1d#

It will probably tell you things you already know, but even if it does, it's worth watching.