
19 March, 2010

PIX is great but...

...sometimes using colors on the screen is actually better. The trick is all in the iteration time. PIX gives you accurate numbers and step-by-step execution of a shader, but if you can easily change/compile/reload shaders and quickly arrange dedicated on-screen views of some render passes, then you might actually find your bugs faster, even if all you have are colors.

I just finished debugging a very nasty bug involving coordinates in different spaces and decoding of the z-buffer. I worked a lot in PIX, but I managed to find the problem only when I abandoned it and started comparing the data I recovered from the z-buffer view with the ground truth I had from the main rendering pass.

I quickly arranged a debug view of the data I recovered from the Z-buffer, by displaying it on a corner of the screen. Then I changed the main shaders to output the data I needed. A few changes later, I fixed the problem.

That was only possible thanks to the fact that our rendering is scripted (so it's easy to add such debug-views and stuff like that) and that I can modify the shaders and they get automatically updated in game.

I guess that's not surprising to anyone who has worked with scripting languages. Most of the time, if you can live-edit your code, you end up not using the debugger but just inserting print statements to see "live" what's happening.

The next step for shader debugging would be to create some more live debugging visualization tools. At the moment I can easily just render things; it would be cool if I could place on the rendered images some "color pickers" that also display an accurate numeric reading, or that are capable of interpreting colors as normals and so on... Visual debugging is great!
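As a minimal sketch of the kind of "color picker" readback I have in mind (all names here are hypothetical), this decodes an 8-bit RGB debug pixel back into the normal it encodes, assuming the usual n * 0.5 + 0.5 packing a debug-view shader would use:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

struct Normal { float x, y, z; };

// Pack a component from [-1,1] into an 8-bit channel, as a debug
// view shader typically does with n * 0.5 + 0.5.
inline uint8_t packUnorm8(float v) {
    return static_cast<uint8_t>(std::lround((v * 0.5f + 0.5f) * 255.0f));
}

// Decode a picked pixel back into a normal, for the numeric readout.
inline Normal decodeNormal(uint8_t r, uint8_t g, uint8_t b) {
    auto unpack = [](uint8_t c) { return (c / 255.0f) * 2.0f - 1.0f; };
    return { unpack(r), unpack(g), unpack(b) };
}
```

The interesting part is that the picker only needs the framebuffer bytes, so it can be layered on top of any debug view without touching the shaders again.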

18 March, 2010

Homework 2

1) Choose a component in your game/project that is both small and functionally well isolated; for example, the camera system.
2) Identify the modules that depend on its functionality.
3) Identify the modules its functionality depends on.
Note: you might want to use dependency matrices to do this. You can use the excellent NDepend, or similar tools like Understand, Lattix, Visual Studio 2010 Team System, JDepend, Structure101, DTangler, .Net Reflector (opensource) and so on.

Now think about each of those dependencies. Remove from the list all the dependencies that you think you could easily eliminate, because they are not really needed. Remove all the ones that are behind well-defined interfaces (that is to say, they don't depend on the implementation, or in C++, they don't require static linking of the dependent object files).

Everything that survives this process is a big problem. If you end up with many such dependencies on a component that you chose because it was small and functionally well isolated, how many do you have everywhere else?
Is it strange now that your compile and link times are _so_ high? What if you want to trash a component and rewrite it? Code does degenerate, but if it's not isolated, that degeneration will turn into a cancer that you can't operate on.
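The counting in the exercise can itself be automated. A toy sketch (the graph and module names are invented for illustration): walk the dependency graph transitively to see everything a "small" component actually drags in.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy dependency graph: module -> modules it depends on.
using DepGraph = std::map<std::string, std::vector<std::string>>;

// Collect every module a component transitively depends on: these are
// all the things you must link (and rebuild) just to use it.
static void collect(const DepGraph& g, const std::string& m,
                    std::set<std::string>& out) {
    auto it = g.find(m);
    if (it == g.end()) return;
    for (const auto& dep : it->second)
        if (out.insert(dep).second) // recurse only on first visit
            collect(g, dep, out);
}
```

If the transitive set for your camera system comes back with half the engine in it, that's the problem the homework is pointing at.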


16 March, 2010

The world changed!

I'm young. Not too long ago I was still a university student, and back then I remember I liked to learn almost everything about computer science, and more. The only field that I really avoided (with pride) was web programming. JSP, SQL: I knew almost nothing about those things, and I liked it. They were... simply not cool.

Well, I'm regretting that. There is so much going on around the web nowadays, even if economically we're still far from the .com golden era. But technologically... oh my. I realize that most people in most companies and universities are still stuck with Java EE and JSP, and that's why a lot of us didn't notice that there are now plenty of cool things going on on the web side of programming.

I'll list some that you should have a look at, if you don't know them yet (check the links...)

Everything Google, Amazon and the Apache foundation do...

14 March, 2010

Collaborative design experiment

Every time I discuss engine design, there is quite a lot of interest and feedback.

So I figured I could start a little experiment.

I've created a shared notepad with a few ideas, just open it and start adding your considerations, proposals and so on.

Update: I'd consider this experiment ended. I'm happy to see that many people took part in it, and it was a very nice experience, with some good discussions.

In the end the result is probably more an expression of what I think than something community-driven, but I think that's understandable; I don't believe you can really successfully design something in a crowd. Also, I'm a tyrant, and that does not help.

I'll post here a snapshot of the result of this game, but I'll keep the pad up and running until it dies naturally.

I'll be organizing something similar again soon, but I want to find something more specific and focused. Maybe some sort of competition using codepad.

- Layer -1: Prerequisites
  • Coding guidelines
  • Along the lines of: Fuck C++ OOP. Data is the king. Keep compile times low.
  • Build system
  • Continuous build system
  • Static syntax checking
  • Versioning system
  • Code review system

- Layer 0: Language extensions

  • Compiler abstraction
    • Why: Memory alignment and packing, forcing inline, branch hints, asserts and so on and on
  • Stack walker
  • Module/services system
    • Why: avoid dependency hell. A module should be able to request access to another module and depend on its interface, but it should not assume that any other module is up and running in order to execute. Code hot-reloading. Faster iteration.
    • Proposal
      • Each module will expose its interface for the other modules to statically link to. Upon loading, the module registers an instance implementing the interface in the module system, that is a (name,pointer) hashmap. Modules can declare dependencies on other modules, failing to load if the dependency is not loaded already, and being notified if the dependency unloads.
      • On platforms that support dynamic linking we support hot-reloading of modules (serialization/unload/load/deserialization). Even basic services, like reflection or logging, should be considered optional and made into a module.
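The (name, pointer) hashmap at the heart of the proposal can be sketched in a few lines. This is a heavily simplified version with hypothetical names: no dependency declarations, no unload notifications, just registration and lookup with graceful failure when a module is missing.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical base interface; real modules would expose richer APIs.
struct IModule { virtual ~IModule() = default; };

// The (name, pointer) hashmap described above: modules register on
// load, clients query by name and must handle a missing module.
class ModuleRegistry {
public:
    bool registerModule(const std::string& name, IModule* m) {
        return modules_.emplace(name, m).second; // false on duplicates
    }
    IModule* query(const std::string& name) const {
        auto it = modules_.find(name);
        return it == modules_.end() ? nullptr : it->second;
    }
    void unregisterModule(const std::string& name) { modules_.erase(name); }
private:
    std::unordered_map<std::string, IModule*> modules_;
};
```

The important design point is that `query` returning null is a normal, expected result: code that treats every other module as optional is exactly what makes hot-reloading possible.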
  • 'root' object model
    • Why: we need a common base interface for our objects. It will be used for RTTI (really, just a typeId; reflection can take care of everything else) and lifetime management (i.e. reference counting, creation etc.). A COM-like interface can be a solution.
  • Memory allocators
    • Why: Performance, debugging, quotas etc...
    • Proposal
      • A thin interface to request allocations and deallocations. Overrides for the built-in C/C++ allocators to reroute third-party code. A configurable allocation framework that lets you specify which algorithms to use for given pools, memory quotas etc. Logging to debug fragmentation and leaks. It should support thread-specific memory pools as well, to avoid costly unneeded context switches in certain systems (this would turn a bit complicated if pointers were shared between threaded systems though).
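A minimal sketch of that thin interface, with a trivial counting implementation (names are hypothetical; a real version would honor alignment and be thread-safe):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// The thin allocation interface described above.
struct IAllocator {
    virtual void* allocate(size_t size, size_t align) = 0;
    virtual void  deallocate(void* p) = 0;
    virtual ~IAllocator() = default;
};

// A trivial counting allocator: the same wrapper shape can log call
// sites to debug fragmentation and leaks, or enforce memory quotas.
class CountingAllocator : public IAllocator {
public:
    void* allocate(size_t size, size_t) override {
        ++liveAllocs_;
        return std::malloc(size); // alignment ignored in this sketch
    }
    void deallocate(void* p) override {
        if (p) { --liveAllocs_; std::free(p); }
    }
    int liveAllocations() const { return liveAllocs_; }
private:
    int liveAllocs_ = 0;
};
```

Because everything funnels through one virtual interface, swapping in a pool, a quota-enforcing wrapper, or a leak tracker is a one-line change at the composition root.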
  • Object lifetime management
    • Why: Explicit allocation and destruction is not composable. It doesn't work well with a modularized code base.
    • Proposals
      • Smart pointers. They are the only structure that lets you implement either garbage collection or reference counting. Prefer using only those.
      • Garbage collection: reference counting is not so cool. It can leak in nasty ways (loops) and it's not so fast either. It requires queueing destructions in order to execute them in the "right" thread (i.e. for graphics objects).
      • Reference counting: a performant garbage collector is very hard to write. RC should not be made using intermediate handlers to hold the refcount; for maximum performance the refcount should be inside the classes. Also, refcounts should not be explicitly accessible. Each class can implement its own AddRef/Release; the smart pointers will statically dispatch those calls via templates. AddRef/Release methods still have to be virtual in order to support reflection (i.e. for scripting languages).
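The intrusive-refcount idea above can be sketched like this (a single-threaded toy: no atomics, no move support, and the virtual AddRef/Release variants for reflection are omitted):

```cpp
#include <cassert>

// Intrusive refcount: the count lives inside the class, and the smart
// pointer dispatches AddRef/Release statically via the template type.
template <typename T>
class RefPtr {
public:
    RefPtr() = default;
    explicit RefPtr(T* p) : p_(p) { if (p_) p_->AddRef(); }
    RefPtr(const RefPtr& o) : p_(o.p_) { if (p_) p_->AddRef(); }
    ~RefPtr() { if (p_) p_->Release(); }
    RefPtr& operator=(const RefPtr& o) {
        if (o.p_) o.p_->AddRef(); // addref first: safe on self-assignment
        if (p_) p_->Release();
        p_ = o.p_;
        return *this;
    }
    T* operator->() const { return p_; }
    T* get() const { return p_; }
private:
    T* p_ = nullptr;
};

// A hypothetical refcounted resource; `live` just tracks instances
// so we can observe destruction.
struct Texture {
    inline static int live = 0;
    Texture() { ++live; }
    ~Texture() { --live; }
    void AddRef() { ++refs; }
    void Release() { if (--refs == 0) delete this; }
    int refs = 0;
};
```

Note that `AddRef`/`Release` are resolved at compile time through the template parameter: no per-call virtual dispatch, which is the performance argument made above.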

- Layer 1: Core libraries

  • Multithreading support
    • Why: To support multicore and heterogeneous computing environments
    • Proposals
      • Threads and locks support (C++ did not have a standard interface, so we need our own; it has one now with C++0x). Job scheduler/thread pool with dependencies between jobs (data parallelism). It has to be SPU friendly. ParallelFor support. Dependencies between jobs can be of different types: normal (just sync), data (a job writes to a memory area and the dependents get access to it), data-parallel (a job processes an array of data structures, and the scheduler manages splitting the array across different instances of the job), buffered (the job writes to a circular buffer, and the dependents read what the job produced the last time it ran), streamed (like data-parallel, but assumes the dependents do not need to wait for the whole array to be processed and can be fired as soon as any element is done).
      • Inter-thread messaging is needed in many cases (i.e. schedule GPU work from the game or loading threads that do not own the rendering device)
      • Check out: Intel TBB, Apple Grand Central, C# Parallel FX
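The "normal (just sync)" dependency type can be illustrated with a deliberately single-threaded toy scheduler (all names hypothetical): a job runs only once everything it depends on has run. A real scheduler would hand ready jobs to a thread pool instead of looping.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

struct Job {
    std::function<void()> work;
    std::vector<size_t> deps; // indices of jobs that must run first
    bool done = false;
};

// Repeatedly sweep the job list, running anything whose dependencies
// are satisfied, until no progress can be made (i.e. all done, or a
// dependency cycle remains).
static void runAll(std::vector<Job>& jobs) {
    bool progress = true;
    while (progress) {
        progress = false;
        for (auto& j : jobs) {
            if (j.done) continue;
            bool ready = true;
            for (size_t d : j.deps) ready = ready && jobs[d].done;
            if (ready) { j.work(); j.done = true; progress = true; }
        }
    }
}
```

The other dependency flavors (data, buffered, streamed) change *what* the dependents are allowed to read and *when* they become ready, but the core ready-set iteration stays the same.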
  • Resource layer
    • Why: provide an abstracted access to file and network resources. Support for hotloading/swapping/streaming/async loading
    • Proposal
      • Hotloading means that everything has to be accessed using handles... (Not necessarily, but not using them makes it messier: you have to notify the resource consumers that a resource has changed.)
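Why handles make hotloading easy, in sketch form (names invented; a production version would add generation counters to catch stale handles): consumers hold an index, so the data behind it can be swapped without notifying anyone.

```cpp
#include <cassert>
#include <string>
#include <vector>

// `version` stands in for the actual resource payload in this sketch.
struct TextureData { std::string path; int version = 0; };
using Handle = size_t;

class ResourceTable {
public:
    Handle load(const std::string& path) {
        textures_.push_back({path, 1});
        return textures_.size() - 1;
    }
    const TextureData& get(Handle h) const { return textures_[h]; }
    // Hot-reload: swap the data in place; every handle stays valid.
    void hotReload(Handle h) { ++textures_[h].version; }
private:
    std::vector<TextureData> textures_;
};
```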
  • Reflection system
    • Why: Reflection/Introspection is needed for serialization, networking, live-editing (tools), scripting, garbage collection and so on
    • Proposals
      • A simple name hash for the type id. Parse the program debug database (i.e. the PDB) and convert it into a reflection DB as a compile step. Provide a pretty generic interface for reflection queries.
      • Pros: avoids template/macro magic and the like. The database can be loaded on demand and split to alleviate memory requirements. Complex queries are possible. Changing the implementation does not require changing everything in the source. The database is easily accessible from other languages, and can be used to gather code metrics, do coverage analysis and other stuff. It works with third-party code and libraries.
      • Cons: less easy to provide custom attributes. Serialization often requires custom functions for some class members. Calling reflected functions (i.e. to instantiate a reflected class) requires compiler-specific trickery (you can use packaged function pointers - elaborate?).
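What the runtime side of an offline-built reflection database might look like, reduced to its essentials (hypothetical names; a real DB would key on a name hash and load sections on demand):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// One reflected member: name, type name, byte offset in the object.
struct Field { std::string name; std::string type; size_t offset; };

// The reflection DB: per-type field lists, built offline (e.g. from
// the PDB) and simply loaded and queried at runtime.
class ReflectionDB {
public:
    void addType(const std::string& type, std::vector<Field> fields) {
        types_[type] = std::move(fields);
    }
    const std::vector<Field>* fieldsOf(const std::string& type) const {
        auto it = types_.find(type);
        return it == types_.end() ? nullptr : &it->second;
    }
private:
    std::unordered_map<std::string, std::vector<Field>> types_;
};
```

Serialization, live editing and scripting bindings would all be generic walks over `fieldsOf`, which is why nothing in the game code itself needs template or macro magic.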
  • Math library
    • Proposals
      • Linear algebra: classes for Vector2, Vector3, Vector4, Matrix4x4, Matrix2x2, Quaternion, and a ScaleRotationTranslation class. Geometry: classes for primitive intersection, distance and various other shit as needed. Classes for VPU abstraction (SIMD). Random numbers...
      • Pros: having an SRT class lets you use a 3x4 affine matrix or quaternions internally. Also you can make sure that no skews are ever generated. The main math library should be SIMD friendly (align data), but VPU operations can actually be slower for general-purpose computation (i.e. when doing a lot of logic, scalar operations and branches). SIMD processing should be hand-optimized inside loops.
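The "no skews by construction" argument can be made concrete. A scalar sketch of an SRT transform (no SIMD, unit quaternion assumed): because only scale, rotation and translation are stored, a skew simply cannot be represented.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; }; // assumed unit-length

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// SRT sketch: scale, then rotate, then translate.
struct SRT {
    Vec3 scale; Quat rot; Vec3 trans;

    Vec3 apply(Vec3 p) const {
        Vec3 s = { p.x * scale.x, p.y * scale.y, p.z * scale.z };
        // Rotate by a unit quaternion: v' = v + 2w(q x v) + 2(q x (q x v))
        Vec3 q  = { rot.x, rot.y, rot.z };
        Vec3 c1 = cross(q, s);
        Vec3 c2 = cross(q, c1);
        return { s.x + 2 * (rot.w * c1.x + c2.x) + trans.x,
                 s.y + 2 * (rot.w * c1.y + c2.y) + trans.y,
                 s.z + 2 * (rot.w * c1.z + c2.z) + trans.z };
    }
};
```

Converting an SRT to a 3x4 affine matrix for the GPU is a one-way trip; the engine-side representation stays clean.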
  • Collections library
    • Why: STL is not alignment-friendly. STL is not designed to be used in interfaces. Standard STL implementations are very hard to debug. Standard STL implementations won't play nicely with reflection. Performance varies across platforms.
    • Proposals
      • We need a few specific containers that are fast when used as templates, but can also be passed around generically (the C# containers can be an inspiration). The API should stay close to STL when possible: vector, fixed_vector, hash_map, queue, stack, lockless_queue, n-ary tree, skip lists, probabilistic sets (i.e. bloom filters), perfect hashing...
      • Second option: creating custom containers is too hard, so just choose a good STL and add to that (memory allocators that respect a given alignment, and maybe some extensions in terms of algorithms). Reflection would still be quite a problem.
  • Global parameter system
    • Why: we need support for shared runtime parameters; reflection is 'only' about per-object stuff. Also, reflection can't handle a parameter database like the one that is needed for shader parameters and so on...
    • Proposal
      • Parameters have to be organized in sections. We can have different types of sections, but I'd say we need at least: "external", read-only and linked to a database where the values can be organized in a meaningful way; "shared", where a spinlock is used to regulate access to the class; "thread", which have to be accessed always from the same thread; and "threadmessage", which are like "thread" but support a list of callbacks to be called when a parameter changes.
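A sketch of the "threadmessage" section type (names hypothetical; locking and the other section flavors are omitted): a parameter store that fires registered callbacks whenever a value changes.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// A parameter section that notifies listeners on every change, which
// is what makes it usable for live tweaking across systems.
class ParamSection {
public:
    using Callback = std::function<void(const std::string&, float)>;

    void onChange(Callback cb) { callbacks_.push_back(std::move(cb)); }

    void set(const std::string& name, float value) {
        params_[name] = value;
        for (auto& cb : callbacks_) cb(name, value); // notify listeners
    }
    float get(const std::string& name) const { return params_.at(name); }

private:
    std::map<std::string, float> params_;
    std::vector<Callback> callbacks_;
};
```

In a real "threadmessage" section the callbacks would be queued and drained on the owning thread rather than invoked inline.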
  • Simple networking layer
    • Why: to connect to PC-side tools. It needs to provide RPC (which can leverage reflection!). RPC can be used for automated testing as well...
  • Profiling
  • Logging
  • Serialization/versioning
    • Critique
      • To me there is a dilemma here. Should the framework base its data on reflection and serialization, or should it use a parameter database (materials, properties) and specific file formats (meshes, textures...)? Of course an answer could be "both", and I guess you need code for both, but there is still a design decision to be made here.
- Layer 2:  Low-level rendering infrastructure

  • Rendering device abstraction
    • Why: to support multiple platforms/API
    • Proposals
      • DX11 style. No direct access to single renderstates and resources; block uploads. Support for multiple devices/command buffer recording. Mesh dispatch interface. Needs wrappers for meshes/streams/textures etc. The native API always has to be usable: the device has to be stateless, or support state flushing/invalidation.
  • Debug rendering interface
    • Why: tools, graphs etc
    • Proposals
      • Primitives, text, lines etc... Immediate-mode drawing (beginPrimitive-pushVertex...-endPrimitive). Thread-safe (i.e. every command gets pushed into a lockless queue). Needs to support command sorting (i.e. differentiate between 2d draw and 3d debug objects).
  • Debug GUI
    • Why:  widgets and so on...
    • Proposals
  • Effect system
    • Why: to simplify the creation of shaders and the implementation of graphic techniques
    • Proposals
      • Similar to D3DXEffect, CgFx
      • Natively integrated into the low-level rendering infrastructure
  • Parameter editing GUI
    • Why: in-game inspection and editing of the parameter system; uses the debug rendering and debug GUI
  • Graphic resource loaders
    • Why: Implementation of resources (see  the resource layer) for graphics assets
  • Graphic data manipulation
    • Why: conditioning and runtime data generation. We need texture and mesh tools that make working with those resources on the CPU as easy as doing it on the GPU. DXT compression is also needed.

- Layer 3: User-level libraries
  • Scripting system/hooks
    • Proposals
      • Code-generated bindings from the reflection data
  • Still a lot of things are obviously missing here, i.e. skinning, vertex/uv/geometry processing, LODs, shadows, post-effects and so on. No consensus was reached on those.
- External Layer: Tools
  • Reflection inspector
  • Automated testing

12 March, 2010

Cool stuff people showed me, and that I'm re-posting here.

New ways of editing code in this very cool Java IDE experiment: CodeBubbles

Distance fields used for shadows in UE

Distance fields for raytracing, in Go

A free data mining tool for games: Echo Chambers

Processing-made Mycellium

Color-grading should be better understood in games

This scribble tool does something that looks arbitrary, but feels "right" while drawing.

09 March, 2010

My counter has registered more than 68000 visits (well, not counting the RSS accesses, which I would expect to account for most of them... but anyway). Time for an Amiga tribute*!

P.S. I do not endorse any group of individuals who still think the Amiga is alive and better than anything else. The Amiga is not a Commodore 64; it's dead. It was a nice machine, but now it's dead. And no, its multithreading is not better than Windows', and people don't give a shit that you can format a floppy disk while doing other things.

07 March, 2010

My current favorite: Bad Company 2; outdoors it totally nails the look.
Mass Effect 2 has some very nice lighting in some real-time non-playable sequences, but its skin model is remarkably flat.
Fight Night is impressive too; it's by far the most complicated of them all in terms of lighting and skin effects, and it's the only one that has no shadowmap problems...

P.S. My job/goal this year is to beat them all. It will be tough, and it is stressful... We'll see.

06 March, 2010

1) Write down all the ideas you have in your work/programming hobby on post-it notes.
2) When you have the chance to implement a given idea, move its note to one of the following stacks: success, failure, unknown.
3) Make sure to scan the unknown stack from time to time to see if those ideas, after more testing, can be moved into one of the other two stacks.
4) At the end of your project, count the total number of notes and the ratio of successful ideas to failed ones.

If you have fewer than ten notes, you're being assigned mindless tasks; either you're a co-op or you should change your job.

If you have a lot of notes, but mostly successful ones, you're probably not risking enough. Your ideas are boring, you're not experimenting, you're living on your experience.

If you have a lot of notes, and a lot of failures, congratulations. You're either utterly incompetent or you're a genius. Either way, you're trying to create something new (or at least, new to you...). I hope you survive the next waves of layoffs; be happy.

I had the urge to write this after the discussions I had about my previous post. It seems to me that there are a lot of preconceptions that are tough to change even when they are clearly wrong.
Our industry moves fast, and some days you feel like you're on the bleeding edge of technology. Other days, you look around and you wonder how it's possible that we're doing our work with the same tools and mindset of twenty years ago.

Is it really so debatable that C++ is outdated? That reference counting is not so great? That transform hierarchies should not be the fundamental data structure of your 3d engine? Or that Collada (or any intermediate format, really) is indeed a waste of time for games?
I wrote about those things and more on this blog, and each time I write about them, I find a lot of resistance.

Are we that scared of trying new things? Can we really never fail?

Some weeks ago a coworker of mine showed me a presentation we had at my company about the things we could do with the "next gen". And we were pretty disappointed to see that those early predictions were still more or less the things we are doing right now. We had the experience to know what we could do, but we didn't try to do much more than that. We didn't try things we didn't know...

Of course, the next step would be asking ourselves what we can do to be better than that. Personally, I suspect we're not experimenting more mostly because we really can't do things fast enough. We're slow, our iterations are slow, our languages and tools are hardened and inflexible. But digging into that could be the scope of another post...

P.S. Check out this GUI idea. I suggest watching the first minute or so of the video, stopping it, thinking about it for a while, then finishing it/visiting the forums. I was really surprised to see some innovative thinking in an area as stale and as painful as game GUIs/frontends. I don't think this approach scales well, but most games do not have too-complicated interfaces anyway, and for an indie project it's the perfect approach. Kudos.

02 March, 2010

3D Engines out there

Every now and then people ask me for recommendations on 3d engines for their projects. Honestly I'm not such an expert on the topic; I've always written or used in-house solutions, so my knowledge about free middleware is pretty thin.

On top of that, it does not help that most of the engines I've checked out looked terrible to me. You would expect people to try new and cool ideas in the opensource world, where you're not tied to deadlines and products.
You'd be wrong. Or at least, I've never seen anything interesting or new among the thousands of engine sources you can find on the net. Most of the time they follow the same pattern: they are textbook engines.

They are all scenegraph based. They don't really care about caches or threads. They don't care about iteration time or ergonomics. They support all the wrong formats (i.e. the horrible Collada XML) and they're all busy implementing every "technique" out there. They go against everything I advocate on this blog :D

But enough of this. Here comes my list of notable opensource 3d engines:

Ogre3D. This is perhaps the most famous one. The good thing is that there are some commercial products using it; most of them are not graphically intensive, but still, a good sign.
The community is huge, the documentation is huge, and you'd expect this engine to be well tested. Also, there are bindings for any language out there, and even a native C# port (Axiom).
The architecture does not look very interesting; the engine is big and complicated, but it focuses more on getting a lot of things done than on solving the fundamental problems that a 3d engine should address. For example, it has little support for multithreading, just some locks, and they don't seem too well tested either.
It's packed with a lot of ready-made stuff, from terrain to shadows to LODs, particles, various culling methods and so on. All I can say is that the quality of those black boxes varies, at best. A few years ago I remember looking into its LiSPSM shadows only to find them seriously broken.

Nebula (UPDATE: last? version in the comments here; Radon Labs is no more. Nebula 2 could be interesting too; Nebula 3 is a rewrite - latest version, unofficial community?): This is actually the only opensource 3d engine that I really like among the ones I've looked at. It's made by the game company Radon Labs, which offers it to the public.
It focuses on a lot of the right things and cares about the general infrastructure more than specific features. It has also been used on commercial titles, with great results. Radon Labs ships titles for all the consoles, but obviously the opensource engine can't include parts of the 360 or PS3 devkits. That said, their experience outside the PC realm, where you can push polygons for free without needing to care much about performance or good coding practice (if you're not shipping an AAA title, that is), is a blessing!
Recommended, even if the community does not seem huge or too active; it seems to be mostly carried by Radon alone. That means it's maybe better not to jump on it if you're a total beginner who does not want to learn from the source...

OpenSceneGraph: This one is interesting; it focuses on performance and multithreading, even if it's OpenGL only, and it focuses a lot on portability. It's more a visualization engine than a game one, somewhat like NVidia Scenix or the old OpenGL Performer.
Here you can find its mission, and here its forum and development blog. Obviously it's scenegraph based and, as I said, it looks like an opensource replacement for the dead Performer or the never-born project Fahrenheit. So I won't use it, and you need to keep in mind what its purpose is; still, it has lots of things going on, a good community and a clear design. There is even a nodekit for postprocessing, and some extensions to use Cuda and OpenCL.
Much the same goes for OpenSG; both are worth a look, but I won't recommend either for games or as a guide to designing a 3d engine.

Panda3d: Another engine I don't know much about. But it has shipped titles, and from what I can see, I would prefer it as an alternative to OGRE.

Sauerbraten: The guy behind this engine/game (Wouter van Oortmerssen) is a genius; he has implemented more programming languages than I know, and he worked on Far Cry! Sauerbraten is a very fast 3d engine, to the point that it has even been successfully ported to the iPhone. Surely interesting; even if it's very specialized in what it does, it's worth a look.

Quake 3: It's old, it's very specific, and it's not actively supported. But it's a serious commercial engine, and it was made by id Software. That's more than enough.

Others worth mentioning. Horde3D: I don't know much about this one, but some of its features sound right. From what I can tell there's no title shipped with it, and moreover it's OpenGL only at the moment, which is a bad sign. Oolong Engine: iPhone only. Quelsolaar stuff: ideas that are all over the place. Most of them are nice; some are more toys than productive environments. Anyway, worth a look.
After all this, I'd still say that if you are looking at an engine to learn from, take most of these cum grano salis. On the other hand, if you just want a platform to quickly develop a game, maybe you should pay the small price that comes with Unity, or try the UDK. Or simply stick with 2D...