
28 February, 2010

Small BRDF visualizer

Update: there are now at least two tools that do this properly and seriously: BRDFLab and Disney's BRDF visualizer. This was never a serious BRDF tool, but if you landed here you're probably looking for one.

I've just learned from the Real-Time Rendering blog that Torrance died a couple of weeks ago. There couldn't be a better time to write a BRDF visualizer, something I've always wanted to have handy in order to really understand what's going on with the different shading models and approximations.

So today I tried to write a simple vertex shader to visualize and compare shading models. I thought that visualizing them in BRDF form would be nice (something like this). Also, I wanted to write this without using Nvidia FXComposer (shit).

So I went on a quest to find an FXComposer replacement. It's not the first time I've tried and failed at that. This is what I found this time:

Media Player Classic: off-topic, but I was surprised to find out that the latest version supports HLSL shaders, and you can even edit them while the movie is playing. It's cool, and I think I'll put it to good use prototyping post effects and colour correction.

QShaderEdit: looks nice, it's inspired by the Apple OpenGL Shader Builder, with Cg support added (GLSL is really ugly). I didn't try it as it's source-only, and something tells me it won't compile easily with Visual Studio. It also doesn't seem to support scripting.

glslDevil: it's not an IDE, only a debugger, and GLSL only. So overall it's useless for my purposes, but as a debugger it looks really nice. There is a Windows port, so I'll try it some day.

Lumina: GLSL only, but it supports scripting. There is a Windows port, but the whole project looks discontinued. It's a shame, as it seems to be very decent. Maybe someone should pick it up.

PyOpenGL_Lab: hooks OpenGL up to Python, plus it provides some utility functions like camera management. It's the slimmed-down version of GeeXLab, which I tried and found to be really messy. Overall it looks decent, but there's no CgFX support, and as is, it's not much better/faster than writing your own testbench in C#/SlimDX. Python seems to be the scripting language with the most wrapper libraries; I even found CUDA and OpenCL bindings for it.

In the end I went back, frustrated, to FXComposer (1.8; I use 2.5 only where 1.8 crashes). This is the result: just attach it to a tessellated sphere and bind the LightPos parameter to a point light.



float4x4 ViewProj : ViewProjection;
float4 LightPos : Position = {1,-1,1,0};
float4 OutColor = {0.3,0,0,1};
// Fixed surface normal of the virtual patch whose BRDF we are plotting.
#define BRDF_Normal (float4(-1,0,0,0))

void VShader(in float4 inNormal : NORMAL, out float4 outPos : POSITION, out float4 outCol : COLOR)
{
    // The sphere's vertex normals are used as the outgoing directions to plot.
    float4 BRDF_OutDir = inNormal;
    float4 BRDF_InDir = normalize(LightPos);

    // Phong lobe: mirror the incoming light direction about the surface normal.
    float Phong = pow(max(dot(reflect(-BRDF_InDir, BRDF_Normal), BRDF_OutDir), 0), 10);
    // Blinn-Phong lobe, computed for comparison (swap it into the sum below to visualize it).
    float Blinn = pow(max(dot(normalize(BRDF_InDir + BRDF_OutDir), BRDF_Normal), 0), 10);
    float Lambert = dot(BRDF_InDir, BRDF_Normal);

    // Divide the shading terms by N.L to plot the BRDF itself, not the shaded intensity.
    float BRDF = (Lambert + Phong) / max(dot(BRDF_InDir, BRDF_Normal), 0.01);
    BRDF = max(0, BRDF);
    // Zero out outgoing directions below the surface plane.
    float OutDirCond = (dot(BRDF_Normal, BRDF_OutDir) > 0) ? 1.0 : 0.0;

    // Place each vertex at its direction scaled by the BRDF value, building the lobe shape.
    outPos = mul(float4((inNormal * BRDF * OutDirCond).xyz, 1), ViewProj);
    outCol = OutColor * BRDF;
}

float4 PShader(in float4 inCol : COLOR) : COLOR { return inCol; }

technique t0 { pass p0 {
    VertexShader = compile vs_3_0 VShader();
    PixelShader = compile ps_3_0 PShader();
}}
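
To spell out what the shader plots (roughly, up to constant factors): the Lambert and Phong terms are shading intensities, i.e. a BRDF already multiplied by the cosine of the incoming angle, so dividing by the clamped N.L is what turns the plot back into a BRDF rather than a plot of the shaded colour:

$$L_o(\omega_o) = f_r(\omega_i, \omega_o)\, L_i\, (N \cdot \omega_i) \quad\Rightarrow\quad f_r(\omega_i, \omega_o) \approx \frac{\mathrm{Lambert} + \mathrm{Phong}}{\max(N \cdot \omega_i,\ 0.01)}$$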

27 February, 2010

New layout

I guess most of my readers are using an RSS aggregator (and that my efforts, however small, to format this are wasted), but I still changed the style today. I hope it's simpler and more legible.

So as not to leave this post too empty and useless, I want to link to this image processing program: Naked Light

The idea is very intriguing: it's a node-based editor that uses OpenCL (OSX only at the moment) to accelerate filtering.
Unfortunately I could not make it work decently on my MacBook Pro, maybe it's a little too old for OpenCL, but the approach is interesting.

I found this program while searching for a node-based alternative to Photoshop. I love Photoshop, but its layers / groups / adjustments / smart objects are not ideal when you want to create something that is mostly procedural. It's still very powerful, just not optimal.

A name card I made; the compression killed the patterns.
The only hand-drawn elements in this image are the masks.

I wonder if there is an OpenCL or CUDA library out there for processing big images. I was actually surprised to see that Pixel Bender is limited to hardware-supported texture sizes and does not provide helpers for tiled processing.
As a side note, Pixel Bender can, on the other hand, be used as a number-crunching engine!
That's very cool; in games, even today, we often have too little functionality to schedule generic processing on the GPU (e.g. DXT compression during load time), and vice versa, to easily process interleaved streams, textures and so on on the CPU...

Also, I wonder how many people use a node-based approach for real-time post-processing. You might already know that in general I don't like graph-based shader authoring, but I actually favour it for post.

I do believe it's possible to code-generate optimal post-processing shaders from graphs: you can concatenate all the nodes trivially, the inputs and outputs are very controlled so you don't risk wasting registers, and the only thing you have to handle specially is gather nodes (i.e. blurs), which require splitting the generated shader into multiple passes. Something like the sketch below.
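
Just to make the idea concrete, here is a made-up sketch (not from any real tool) of what a generator could emit for a trivial graph, say a desaturate node feeding a vignette node; the node functions and parameter names are invented for illustration:

// Hypothetical generated shader for the graph: Source -> Desaturate -> Vignette
sampler2D SourceTex;
float DesaturateAmount = 0.5;
float VignetteStrength = 0.4;

float3 Node_Desaturate(float3 c, float amount)
{
    float luma = dot(c, float3(0.299, 0.587, 0.114));
    return lerp(c, float3(luma, luma, luma), amount);
}

float3 Node_Vignette(float3 c, float2 uv, float strength)
{
    float2 d = uv - 0.5;
    return c * (1.0 - strength * saturate(dot(d, d) * 4.0));
}

float4 GeneratedPostFX(float2 uv : TEXCOORD0) : COLOR
{
    // The generator just chains the node bodies; inputs and outputs are fully known,
    // so no registers are wasted. Only gather nodes (blurs) would force a pass split.
    float3 c = tex2D(SourceTex, uv).rgb;
    c = Node_Desaturate(c, DesaturateAmount);
    c = Node_Vignette(c, uv, VignetteStrength);
    return float4(c, 1);
}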

iPhone iPad iPod

I'm lazy. The iPlatforms are incredibly cool; even if I own only a lowly iPod Touch, I have to say it's an impressive product. But I'm lazy.

I don't really care about jumping on the App Store bandwagon, but I'd like to run a few experiments. In order to do that, though, I'd really like not to have to learn Objective-C. I know, that's a problem a lot of developers have.

I know Objective-C is not bad, maybe it's even better than C++, but it's not interesting enough for me to study as a language, and it's different enough from everything I've played with that I can't write stuff without reading the documentation...

So I went on a quest to find a C or C++ example, or library, or something I could start fiddling with. Of course that's stupid, I could have invested the same time in learning the rudiments of Objective-C, but anyway...

The first insight is that Objective-C/C++ can actually compile C/C++ code. So really, you can take the basic OpenGL project provided with Xcode and fiddle with it even just working in C. This post shows how to convert the template project into something that works with C++.
Simple, but what if you also want to create widgets, play with the touch controls and so on, without calling Objective-C functions?
Well, there's not much around, but I think there is enough to start playing. The most useful library I've found is openFrameworks. It has an iPhone port, it's C++ based, and it provides very easy access to OpenGL.

If you want to make something more widget-heavy, the NUI framework looks really cool. 

The Oolong engine has some very nifty ARM assembly snippets. Well worth a look; the ARM VFP unit is peculiar and fun.

I would also love to hack on the iPhone using C#; unfortunately, MonoTouch is commercial only. There are even two (unrelated, afaik) incredibly cool hacks to run Processing(.js) on the iPhone.
They leverage the JavaScript Processing port, which uses the (cool) HTML5 canvas element. This one seems to use the Appcelerator framework to execute the JavaScript code, while this other one claims to execute it with a SpiderMonkey port and a custom-made canvas element implemented using OpenGL ES.

24 February, 2010

Ergonomy rant

I need more time. The more experience I gain in this industry, the more I realize that time is everything. Nothing else matters. Too often we worry about adding bells and whistles, marketable features, things to put on the back of the box, that are ultimately irrelevant.

Do you want to build the best 3D engine out there? Then forget about VSM, SSAO, DOF, SH, CSM, HDR and similar acronyms; none of those should be its defining features.

Learn from cameras (I feel like I've made this analogy far too many times). What's the difference between a professional camera and a prosumer or consumer digicam? Let's consider the Canon 7D as the consumer one, the 5D Mark II as the prosumer, and the 1D Mark IV as the professional. The price difference between them is substantial. What would you expect the most striking difference to be? The image quality?

Well, if so, you would be wrong. In fact, for some uses, one could argue that the 7D even outperforms the other two. Even in the numbers, the 1D manages to have the fewest megapixels of the three. So why is the 1D so expensive? Perhaps surprisingly, it's because it's fast. It can shoot 10 fps versus the 3.9 of the 5D Mark II, and it flash-syncs up to 1/300 of a second, versus the (nominal) 1/200 of its closest rival. It's a camera made for people who can't miss the shot, who have to work fast, focus fast, and shoot with flashes against the sun. Photojournalists.

What's the main difference between the 5D and the 7D, then? Well, to me the best feature of the 5D is its big viewfinder. Let's go even cheaper: the 550D, the most recent breed of the cheapest digital reflex line Canon has. What's the big difference between that one and its bigger brother? To me, it's that the more expensive one has an extra dial on the back; it's faster and easier to change its settings. Ergonomy again.

And so on, really; you can go back and forth, compare cheap compacts with professional film cameras. It's not about the number of fancy features. It's about being able to do your job at your best. Quality, durability, ergonomy, speed.

You need to be able to focus on your job. If you can iterate fast, if you have the right utilities and services in your framework, then your 3D engine can be as thin as the native API itself. It doesn't really matter. You'll notice that the faster you can change, tinker, experiment and write code, the less you need anything else.

For example, it's surely handy to have a tool that gathers metrics and displays performance graphs, to understand what's going wrong in your rendering. I love PIX. But would you need it less if you could just change your rendering on the fly?
What about a debugger? It's a fundamental, fundamental tool, and in no way am I saying we shouldn't leverage all kinds of tools... But when you're coding in a scripting language and you can change things while they're running... well, then, does using prints to debug code look so horrible?

All that is especially true for the expert. The more experienced you are, the less you mind the learning curve. You don't need canned components and automatisms. You don't need babysitting.
A tourist snapping casual pictures may still find it handy to have a camera that can put fancy frames on a sepia-toned image, recognize faces while focusing, and automatically fire the flash when it thinks it's necessary. For a professional, being unable to tinker, and having all those useless buttons in the way of his creative thinking, is a hassle.

That said, it does not mean that engineering a compact, consumer digital camera is easy or should not be done. There's a market for that, and we have to cater to that market too. Not many (successful) companies are made only of experts in code-crunching. And indeed, in many cases we code to make others' work easier, not our own. That might be the right choice sometimes, but we should always try to achieve it at no cost for the programmers.
I've already said this a thousand times, but building a complex data-driven tool to make a given workflow easier, while complicating the overall structure of your code, is a bad choice. Having a thin, fast, robust, high-quality infrastructure, with layers on top to ease the most common tasks, is far better.

22 February, 2010

I've just found out...

...that one of the most incredible people I've ever "met" via the web died last year. It's a sad loss.

F  R  A  V  I  A

30 August 1952, † 3rd May 2009

I hope that his website can live on, maybe in the hands of the community of reality hackers and searchers he created: http://www.searchlores.org

http://www.fravia.com/swansong.htm

http://www.scientificblogging.com/quantum_diaries_survivor/blog/farewell_chicco

:'(

05 February, 2010

The pitfalls of experience

Ours is an industry that, despite being very young, treasures experience. And it's not hard to understand why. An AAA game is a product that (hopefully) will be sold to millions of users, eager to find faults in your work.
Usually your project will run from one to three years, from the beginning to the final shipped version, with deadlines that more often than not simply can't slip. In those years you're going to face all kinds of engineering problems, from writing complicated technology that has to work in real time, to resolving intricate software and hardware problems, to managing team dynamics. And everything has to be done under ever-changing requirements and designs.

If you add on top of that that most of the time you're going to work in a framework that effectively discourages experimentation, with punishing iteration times and complicated, badly written legacy code that works only if you never touch it, you will easily understand why expertise is such a valued virtue.

You have one shot to accomplish anything. You have to do everything right, on time, on budget, and you have to compete in an aggressive worldwide market. Wonder why, lately, games are more and more the same? Why we tend to take no risks?

Experience works wonderfully in those conditions. Most of the time you already know how to solve a given problem; you're an expert. And for all the other problems, you can still see a reasonable solution, one you're confident will work. You can quickly discard bad ideas, understand whether something is possible or not, and estimate how much effort a given thing will take.

Experience is an effective noise filter: it filters out the average ideas other people usually have and focuses on the more productive, safe path. And it's exactly there that things start going wrong. You know that a given thing will work because you already made it work in the past, or because you worked on something similar; you can interpolate between things you've done and adapt them.

You make great things possible, but totally discard the impossible ones. The impossible ideas, or even the wrong ideas, the unreasonable ideas, are the ones that geniuses have. They are the innovations, the change. Ironically, all game companies will have those words in their mission statements. But few really know what that implies, and how you can encourage change.
We've all seen impossible things being done. And then, when we were told the magic behind an idea, a technique, most of the time we saw how trivial a given solution really is, if you dare to explore outside the limits of your experience.

"Alice laughed: "There's no use trying," she said; "one can't believe impossible things."
"I daresay you haven't had much practice," said the Queen. "When I was younger, I always did it for half an hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."


This, from "Alice in wonderland", is a fundamental lesson for us to learn. The first time I realized its importance, was by looking at Crysis. One of the artists in my company used a tool (3dripper) to dump all the meshes, rendercalls and textures of a frame of the game. While looking at them, he found a texture that looked like ambient occlusion, but done dynamically, and showed it to us (programmers). 
I didn't know how that was possible, and before seeing it, I would not even have tried to do such a thing. But after I knew it was possible, it took me just a few days to come up with a shader, that then I found was really, really close to what they did. It used the depth buffer, and it did raytracing on it by sampling in steps, using a routine I adapted from relief mapping. All it took was to know it was possible.
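
Just to sketch the idea (this is a from-memory illustration, not the actual shader I wrote back then; the texture, matrix and helper names are made up, and it assumes a linear view-space depth buffer with +Z pointing into the screen):

// Sketch: march a ray through the depth buffer, relief-mapping style,
// and treat a hit as occlusion. LinearDepthTex and Projection are assumptions.
sampler2D LinearDepthTex;   // linear view-space Z
float4x4 Projection;        // view-to-clip matrix

float2 ViewPosToUV(float3 viewPos)
{
    float4 clipPos = mul(float4(viewPos, 1), Projection);
    float2 ndc = clipPos.xy / clipPos.w;
    return ndc * float2(0.5, -0.5) + 0.5;   // NDC [-1,1] -> texture UV
}

// Returns 1 if the ray from viewPos along viewDir hits the depth buffer, 0 otherwise.
float TraceOcclusion(float3 viewPos, float3 viewDir, int numSteps, float maxDist, float bias)
{
    float3 stepVec = viewDir * (maxDist / numSteps);
    float3 p = viewPos;
    for (int i = 0; i < numSteps; i++)
    {
        p += stepVec;
        float sceneZ = tex2Dlod(LinearDepthTex, float4(ViewPosToUV(p), 0, 0)).r;
        if (sceneZ < p.z - bias)   // stored surface is in front of the ray sample: hit
            return 1.0;
    }
    return 0.0;
}

// Usage idea: average 1 - TraceOcclusion(...) over a few directions in the hemisphere
// around the surface normal to get a rough, fully dynamic ambient occlusion term.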

Experience is not something that can be avoided, and it is indeed useful. But as you become more and more experienced, make sure you always keep your mind open to new possibilities.

Experiment. One thing that helps is to always be more curious than expert. Knowledge is noise: it's made of ideas that you can mix together to create new things. Experience filters the noise. Keeping the two in good balance is fundamental.

Talk to outsiders. Give presentations, discuss with other people, juniors, artists, people who have a different view of the problem. Most of the time their ideas won't directly help you, but they will shake your brain, force you to take another look at the problem, give you that spark that can become a new solution.
I've often found myself having new ideas, or finding better solutions, just by presenting the work I've done. Most of the time it's not even the discussion that helps; I have new ideas while I write the presentation, or while I explain it, even before we start debating it.