
03 February, 2008

Reverse Engineering

Reverse engineering rendering engines can be an excellent learning exercise (even if it's not legal in some countries). Most of the time the reversing process is interesting in itself: you delve into a renderer and apply your knowledge to abstract structures, forcing yourself to work with the things you already know. And most of the time the map you build is not the same one the original developer had in mind, so you end up discovering new ideas.

My one and only tool for engine RE (used with 3dsMax).

For example, when one of the artists I was working with first saw a strangely colored/dithered screen-space texture in video card memory while running Crysis (before reading the Crytek SIGGRAPH paper, of course), my mental process was more or less the following:
  1. One of the channels seems to be an ambient occlusion contribution.
  2. The dither seems to be regular. Most probably, it's a 4x4 block where each pixel in the block is mapped to a given sampling direction / strategy.
  3. De-interlacing the blocks in Photoshop confirmed that. But it seemed to me that each pixel was coupled with a given raytracing direction (that was wrong).
  4. Real raytracing is too expensive, and the effect seems to happen in post processing, so they are probably reconstructing geometric information from the Z(W)-buffer. Some artifacts on narrow poles in the scene confirmed that.
  5. Raymarching 2D heightfields is nothing new, e.g. relief mapping...
  6. ...I guess I can modify a relief mapping shader to do ambient occlusion, sampling the hemisphere around the normal of every screen pixel and raymarching the Z-buffer (see the sketch after this list)...
  7. After reading the SIGGRAPH paper about Crytek's screen space ambient occlusion, I discovered that this is not the Crytek way (I did not reverse the Crysis pixel shader); mine is probably worse, but it was an exciting journey...
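
To make point 6 a bit more concrete, here is a minimal CPU-side sketch of the "raymarch the Z-buffer over the hemisphere" idea. It is only an illustration of the approach described in the list above, not Crytek's shader (which I did not reverse); the pinhole projection and every helper name in it are simplifying assumptions of mine.

```cpp
// Sketch of ambient occlusion by raymarching a depth buffer.
// All names and the camera model are hypothetical, for illustration only.
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct DepthBuffer {
    int width, height;
    std::vector<float> linearZ; // view-space depth per pixel

    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return linearZ[y * width + x];
    }
};

// Project a view-space point back to pixel coordinates (assumed pinhole camera
// with a focal length in pixels, standing in for the real projection matrix).
static void projectToScreen(Vec3 p, float focal, int w, int h, float& px, float& py) {
    px = w * 0.5f + focal * p.x / p.z;
    py = h * 0.5f + focal * p.y / p.z;
}

// Ambient occlusion for one pixel: march a few steps along each hemisphere
// direction; if a marched point ends up behind the stored depth, the ray is
// considered blocked and contributes occlusion.
float ambientOcclusion(const DepthBuffer& depth, Vec3 viewPos, Vec3 normal,
                       const std::vector<Vec3>& directions, float focal,
                       float radius, int steps)
{
    int blocked = 0, total = 0;
    for (Vec3 dir : directions) {
        if (dot(dir, normal) <= 0.0f) dir = scale(dir, -1.0f); // flip into the hemisphere
        ++total;
        for (int s = 1; s <= steps; ++s) {
            Vec3 sample = add(viewPos, scale(dir, radius * s / steps));
            float px, py;
            projectToScreen(sample, focal, depth.width, depth.height, px, py);
            float sceneZ = depth.at(int(px), int(py));
            if (sceneZ < sample.z - 1e-3f) { // geometry in front of the marched point
                ++blocked;
                break;
            }
        }
    }
    return 1.0f - float(blocked) / float(total); // 1 = fully open, 0 = fully occluded
}
```

In a real implementation this would of course run per pixel in a shader, with the sampling directions jittered per 4x4 pixel block (exactly the kind of pattern noticed in point 2) and a softer, distance-attenuated occlusion test instead of a hard blocked/unblocked one.
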
Another nice tale comes from reversing Colin McRae DiRT, but maybe I'll write about that later on. The interesting thing I discovered in that one, as far as I can tell, is that they merge all the meshes sharing the same vertex declaration, so that everything can be drawn very fast all together without materials (for example, when computing shadowmaps), and drawn in separate pieces when materials are needed, by binding ad-hoc index buffers to the merged vertex buffer.
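
If I had to sketch that batching idea in Direct3D 9 terms it would look roughly like the following. This is my own reconstruction of what I think they do, not anything taken from the DiRT binaries, and every struct and field name is hypothetical.

```cpp
// Merged-batch rendering sketch: one big vertex buffer per vertex declaration,
// a full index buffer for material-less passes (shadowmaps), and small
// per-material index buffers for the regular passes.
#include <d3d9.h>
#include <vector>

struct MergedBatch {
    IDirect3DVertexDeclaration9* decl;     // shared vertex declaration
    IDirect3DVertexBuffer9*      mergedVB; // all meshes appended back to back
    IDirect3DIndexBuffer9*       fullIB;   // indices covering every mesh
    UINT stride, vertexCount, fullTriCount;
};

struct MaterialPiece {
    IDirect3DIndexBuffer9* ib;             // ad-hoc indices into mergedVB
    UINT triCount;
    // material state omitted
};

// Shadowmap pass: no materials needed, draw the whole merged batch in one call.
void DrawBatchForShadows(IDirect3DDevice9* dev, const MergedBatch& b)
{
    dev->SetVertexDeclaration(b.decl);
    dev->SetStreamSource(0, b.mergedVB, 0, b.stride);
    dev->SetIndices(b.fullIB);
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                              b.vertexCount, 0, b.fullTriCount);
}

// Material pass: same vertex buffer, but each piece binds its own index buffer.
void DrawBatchWithMaterials(IDirect3DDevice9* dev, const MergedBatch& b,
                            const std::vector<MaterialPiece>& pieces)
{
    dev->SetVertexDeclaration(b.decl);
    dev->SetStreamSource(0, b.mergedVB, 0, b.stride);
    for (const MaterialPiece& p : pieces) {
        // ...set the piece's textures and shaders here...
        dev->SetIndices(p.ib);
        dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                  b.vertexCount, 0, p.triCount);
    }
}
```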

2 comments:

Anonymous said...

You can also use NVPerfHud for RE.

DEADC0DE said...

No, I don't think you can. NVPerfHud requires (or used to require, at least) that the application select the reference rasterizer, which it then intercepts and replaces with hardware rendering; that was done precisely to AVOID RE of shipped titles. If the situation has changed, then I'm more than happy. I'll check it.