18 October, 2008

Impossible is approximatively possible

I'm forcing myself to drive this blog more towards practical matters and less towards anti-C++ rants and how cool other languages are (hint, hint)... The problem with that is my to-do list, which nowadays, after work, is full of non-programming tasks... But anyway, let's move on with today's topic: simulating realistic reflections for wet asphalt.

MotoGP'08, wet track, (C) Capcom

I was really happy to see the preview of MotoGP'08. It's in some ways the sequel of the last game I did in Italy, SuperBike'07: it's based on the same base technology that my colleagues in Milestone's R&D group and I developed. It was a huge amount of work, five people working on five platforms, writing almost everything from scratch (while the game itself was still based on the solid grounds of our oldgen ones, the 3D engine and the tools started from zero).

One of the effects I took care of was the wet road shading. I don't know about the technology of the actual shipped games; I can guess it's an improved version of my original work, but that's not really important for this post. What I want to describe is the creative process of approximating a physical effect...

Everything starts from the requirements. Unfortunately at that time we didn't have any formal process for that, we were not "agile", we were just putting in our best effort without much strategy. So all I got was a bunch of reference pictures, the games in our library to look through for other implementations of the same idea, and a lot of talking with our art director. Searching the web I found a couple of papers, one a little bit old but geared specifically towards driving simulations.

The basics of the shader were easy:
  • A wet road is a two layer material, the dry asphalt with a layer of water on top. We will simply alpha blend (lerp) between the two.
  • We want to have a variable water level, or wetness, on the track surface.
  • The water layer is mostly about specular reflection.
  • As we don't have ponds on race tracks, we could ignore the bending of light caused by the refraction (so we consider the IOR of the water to be the same as the air's one).
  • Water will reflect the scene lights using a Blinn BRDF.
  • Water will have the same normals as the underlying asphalt if the water layer is thin, but it will "fill" asphalt discontinuities if it is thick enough. That's easy if the asphalt has a normalmap: we simply interpolate that with the original geometry normal proportionally to the water level (see the sketch right after this list).
  • We need the reflection of the scene objects into the water.
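
Just to make those points concrete, here's a minimal sketch of the layering; all the names (WETNESSMAP, DRYDIFFUSEMAP, ASPHALTNORMALMAP, LIGHTCOLOR, SPECULARPOWER, the tangent-space light and view vectors) are placeholders of mine, not the actual output of our shader generator:

// Minimal sketch of the two-layer wet asphalt idea. All names are hypothetical.
float wetness = tex2D(WETNESSMAP, IN.UV).g;             // per-pixel water level
float3 dryColor = tex2D(DRYDIFFUSEMAP, IN.UV).rgb;      // dry asphalt layer

// Thin water follows the asphalt normalmap, thick water "fills" it towards the
// smooth geometric normal (float3(0,0,1) in tangent space).
float3 asphaltNormal = normalize(tex2D(ASPHALTNORMALMAP, IN.UV).xyz * 2 - 1);
float3 waterNormal = normalize(lerp(asphaltNormal, float3(0,0,1), wetness));

// Blinn specular for the water layer (no refraction bending, IOR ~ air's).
// The scene reflection described later in the post would be added here too.
float3 H = normalize(lightDirTGS + viewDirTGS);
float3 waterColor = LIGHTCOLOR * pow(saturate(dot(waterNormal, H)), SPECULARPOWER);

// Two-layer material: lerp (alpha blend) between dry and wet shading.
float3 roadColor = lerp(dryColor, waterColor, wetness);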
I ended up using the "skids" texturemap (and uv layout) to encode the wetness amount in one of its channels (skids are monochrome, so they require only one channel). Actually, our shader system was based on a "shader generator" tool where artists could flip various layers and options on and off in 3dsMax and generate a shader out of it, so the wetness map could be linked to any channel of any texture we were using...

Everything here seems straightforward and can be done with various levels of sophistication. For example, an idea that we discarded, as it was complicated to handle by the gameplay code, was to have the bikes dynamically interact with the water, drying the areas they passed over.

The problem comes when you try to implement the last point, the water reflections. Reflections from planar mirrors are very easy: you only have to render the scene transformed by the mirror's plane in a separate pass and you're done. A race track itself is not flat, but this is not a huge problem; it's almost impossible to notice the error if you handle the bikes correctly (mirroring them with a "local" plane located just under them; if you use the same plane for all of them, some reflections will appear to be detached from the contact point between the tires and the ground).
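
As a side note, "transformed by the mirror's plane" just means a reflection matrix; a minimal sketch, assuming a D3D-style row-vector convention and a plane expressed as dot(n, p) + d = 0 (MakeReflectionMatrix is a made-up helper, not engine code):

// Reflection matrix about the plane dot(n, p) + d = 0, with n normalized.
// Row-vector convention (p' = mul(p, M)); remember to flip the cull mode when
// rendering the mirrored pass, as the reflection changes handedness.
float4x4 MakeReflectionMatrix(float3 n, float d)
{
    return float4x4(
        1 - 2*n.x*n.x,  -2*n.x*n.y,     -2*n.x*n.z,     0,
        -2*n.y*n.x,     1 - 2*n.y*n.y,  -2*n.y*n.z,     0,
        -2*n.z*n.x,     -2*n.z*n.y,     1 - 2*n.z*n.z,  0,
        -2*d*n.x,       -2*d*n.y,       -2*d*n.z,       1);
}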

Easy, you can code that in no time, and it will look like a marble plane... The problem is that the asphalt, even when wet, still has a pretty rough surface, and thus it won't behave like a perfect mirror; it will behave more like a broken one. Art direction asked for realistic reflections, so... for sure not like that.

Let's stop thinking about hacks and think about what happens in the real world... Let's follow a ray of light that went from a light to an object, then to the asphalt and then to the eye/camera... backwards (under the framework of geometrical optics, which is what we use for computer graphics, you can always go backwards; for more details see the famous Ph.D. thesis by Eric Veach)!

So we start at the camera and go towards the track point we're considering; from there the ray went towards a point on a bike. In which direction? In general, we can't know: any possible direction could make the connection as long as it doesn't have a BRDF value of zero, otherwise that connection will have no effect on the shading of the track and thus we won't be able to see it. After bouncing in that direction, the ray travels for an unknown distance, reaches the bike, and from there it goes towards a light, for which we know the location.

Now, simulating all this is impossible: we have two things that we don't know, the reflection direction and the distance the light ray travelled between the track and the bike, and those can only be computed using raytracing...
Let's try now to fill the holes using some approximations that we can easily compute on a GPU.

First of all we need the direction. That's easy: if we consider our reflections to be perfectly specular, the BRDF will be a Dirac impulse, it will have only one direction for which it's non-zero, and that is the reflected direction of the view ray (camera to track) around the (track) normal.
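
In HLSL that's just the reflect intrinsic (variable names here are illustrative, not from the actual shader):

// The view ray goes from the camera towards the track point; reflecting it
// around the track normal gives the single direction where the mirror BRDF is non-zero.
float3 viewRay = normalize(trackWorldPos - cameraWorldPos);
float3 reflDir = reflect(viewRay, trackNormal); // reflect(i,n) = i - 2*dot(i,n)*n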

The second thing that we don't know is the distance the ray travelled; we can't compute that, it would require raytracing. In general, reflections would require that, so why are the planar mirror ones an exception? Because in that case the reflection rays are coherent: visibility can be computed for each point on the mirror using a projection matrix, and that's exactly what rasterization is able to do!
If we can render planar mirrors, we can also compute the distance of each reflected object to the reflection plane. In fact it's really easy! So we do have a measure of distance, but not the one we want: not the distance our reflected ray travels according to the rough asphalt normals, but the one it travels according to a smooth, marble-like surface. It's still something!
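
One way of getting that measure where the final shader expects it (the alpha channel of the reflection map, see the code below) is to write it out during the mirrored pass. A sketch, where PSINPUT, ShadeReflectedObject, MIRRORPLANE and MAXREFLDISTANCE are all made-up names:

// Mirrored-scene pass: output the reflected object's color in RGB and its
// distance from the mirror plane, normalized to [0..1], in the alpha channel.
// MIRRORPLANE.xyz is the plane normal, MIRRORPLANE.w its d term.
float4 ReflectionPassPS(PSINPUT IN) : COLOR
{
    float3 color = ShadeReflectedObject(IN); // whatever shading this pass does
    float planeDist = abs(dot(MIRRORPLANE.xyz, IN.WorldPos) + MIRRORPLANE.w);
    return float4(color, saturate(planeDist / MAXREFLDISTANCE));
}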

How do we go from smooth and flat to rough? Well, the reflected vectors are not that far apart: if we have the reflected point on a smooth mirror, we can reasonably assume the point the rough mirror would hit is more or less around the point the smooth mirror reflected. So the idea is simple: we take the perfect reflection we have in the render-to-texture image, and instead of reading the "right" pixel we read a pixel around it, offset in a direction that is the same as the difference vector between the smooth reflection vector and the rough one. But that difference is the same as the one between the geometric normal and the normalmap one! Everything is going smoothly... We only need to know how far to go in that direction, but that's not a huge problem either: we can approximate it with the distance between the point we would have hit with a perfectly smooth mirror and the mirror itself, a distance that is straightforward to compute when rendering the perfect reflection texture, or in a second pass, by resolving the zbuffer of the reflection render.

Let's code this:

// Store a copy of the POSITION register in another register (POSITION is not
// directly readable in the pixel shader)
float2 perfectReflUV = (IN.CopyPos.xy / IN.CopyPos.w)*float2(0.5f,-0.5f) + 0.5f;

// Fetch from the screenspace reflection map, the approximation of the track to
// reflected object distance... It has to be normalized between zero and one.
float reflectionDistance = tex2D(REFLECTIONMAP, perfectReflUV).a;

// Compute a distortion approximation by scaling by a constant factor the normalmap
// normal (expressed in tangent space)
float2 distortionApprox = normalMapNormalTGS.xy * DISTORTIONFACTOR;

// Fetch the final reflected object color...
float2 reflUV = perfectReflUV + distortionApprox * reflectionDistance;
float3 reflection = tex2D(REFLECTIONMAP, reflUV).rgb;

That actually works, but it will be very noisy, especially when animated. Why? Because the frequency of our UV distortion can be very high, as it depends on the track normalmap, and the track is nearly parallel to the view direction, so its texture mapping frequencies are easily very high (that's why anisotropic filtering is a must for racing games).

How do we fight high frequencies? Well, with supersampling! But that's expensive... Other ideas? Who said prefiltering? We could blur our distorted result... but that's pretty much like blurring the reflection image itself... and that is easy to do by generating some mipmaps for it! We know how much we are distorting the reads from that image, so we can choose our mipmap level based on that...
Ok, we're ready for the final version of our code now... I've also implemented another slight improvement: I read the distance from a pre-distorted UV... That will cause some reflections of the near objects to leak into the far ones (i.e. the sky), but the previous version had the opposite problem, which was in my opinion more noticeable... Enjoy!

// Store a copy of the POSITION register in another register (POSITION is not
// directly readable in the pixel shader)
float2 perfectReflUV = (IN.CopyPos.xy / IN.CopyPos.w)*float2(0.5f,-0.5f) + 0.5f;

// Compute a distortion approximation by scaling by a constant factor the normalmap
// normal (expressed in tangent space)... 0.5f is an estimate of the "right"
// reflectionDistance that we don't know (we should raymarch to find it...)
float2 distortionApprox = normalMapNormalTGS.xy * DISTORTIONFACTOR * 0.5f;

// Fetch from the screenspace reflection map, the approximation of the track to
// reflected object distance... It has to be normalized between zero and one.
float reflectionDistance = tex2D(REFLECTIONMAP, perfectReflUV + distortionApprox).a;
distortionApprox = normalMapNormalTGS.xy * DISTORTIONFACTOR * reflectionDistance;
// we could continue iterating to find an intersection, but we don't...

// Fetch the final reflected object color:

float2 reflUV = perfectReflUV + distortionApprox;
// tex2Dlod reads the mip level from the w component: the bigger the distance
// (and thus the distortion), the blurrier the lookup.
float4 reflUV_LOD = float4(reflUV, 0, REFLECTIONMAP_MIPMAP_LEVELS * reflectionDistance);
float3 reflection = tex2Dlod(REFLECTIONMAP, reflUV_LOD).rgb;

Last but not least, you'll notice that I haven't talked much about programmer-artist iteration, even though I'm kind of an "evangelist" of that. Why? It's simple: if you're asked to reproduce reality, then you know what you want, and if you do that by approximating the real thing you know which errors you're making, so there's hardly much to iterate on. Of course the final validation has to be given by the art direction, and of course they can say it looks like crap and that they prefer a hack over your nicely crafted, physically inspired code... But that did not happen, and in any case, a physically based effect usually requires way fewer parameters, and thus less tuning and iteration, than a hack-based one...

Update: continues here...
Update: some slight changes to the "final code"
Update: I didn't provide many details about my use of texture mipmaps as an approximation of various blur levels... That's of course wrong, and it may be very wrong if you have small emitting objects (i.e. headlights or traffic lights) in your reflection map. In that case you might want to cheat and render those objects with a halo (particles...) around them, to "blur" more without extra rendering costs, or do the right thing: use a 3D texture map instead of mipmap levels, blur each z-slice with a different kernel width, and maybe consider some way of HDR color encoding...
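
For that last alternative, the final lookup would become something like this (REFLECTIONVOLUME is a made-up name for a volume texture whose z-slices hold increasingly blurred copies of the reflection image):

// Instead of tex2Dlod with a distance-driven mip level, sample a volume texture
// where the z coordinate selects the blur amount (slice 0 = the sharp image).
float3 reflection = tex3D(REFLECTIONVOLUME, float3(reflUV, reflectionDistance)).rgb;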

3 comments:

Simon Strandgaard said...

Nice result and good explanation.

Anonymous said...

This brings back memories of our work together ...
your old mates APPROVE this post very much! :)

DEADC0DE said...

nub: I don't know how that effect has been used in MotoGP; for sure in our shader generator you could use the normalmap only for the reflection (where "for sure" means most probably, iirc :D). In the next post there's a hint on why it's wise to do so...