
23 February, 2008

Next-gen and realism.

First of all, "real" in next-gen games usually doesn't mean real as in the real world. It's more real as in film versions of reality. And that's obvious: we want to express our view, not the bare truth. It's also very fortunate, because we can get creative.

Physically based or not? Is Blinn still right? What looks good, is good in Computer Graphics?
Of course he is. Graphics is about looks. But we should care about physics not because we are unable to achieve the correct look with fakes (and there are PS2 games that prove it), but because fakes are usually hard.

Cheats will always be done. We don't have enough power not to cheat, even considering the simple models of light used in today's offline renderers. But now, doing "the right thing" is a great tool that helps a lot. Ease of use was the main reason behind Global Illumination in the first place, and now it's the same for the next generation of unbiased renderers. It's just more convenient to work in the most accurate light model you can simulate, because it will usually look right without much tuning.

But always remember that you should empower artists; don't take it to extremes. Artists are very good at tweaking stuff, so always keep usability in mind. A fully automated GI solution can be a nightmare for artists if they can't bend the lighting model to their needs when they need to.

So start with physics, then add hackability on top of that.

In the end, Blinn's motto nowadays also has a different meaning to me. It tells me to care about perception and psychology (the uncanny valley? Crysis had to address that problem, for example; we are there), as we're getting close to the limit where those components are really important.


DEADC0DE said...

In Italy there are some good companies; first of all I'd recommend Milestone in Milan, but there are others too. I think a list of all Italian developers is on the gameprog-ita site. Going abroad is certainly a good choice as well, but it depends a lot on your skill level. As for studies: a good computer science faculty, because the theory is very important; the practice doesn't need university courses, you just sit down, read, program, read, program, and so on. Finally, creating a company from scratch is, I believe, a titanic undertaking. To make a game that gets onto the shelves you need big investments, and you won't get them without something extremely good to show. For an indie game, on the other hand, there are many possibilities; some indie games are really beautiful (Crayon Physics comes to mind, or Penumbra: Overture), but there what counts is the originality of the idea. Set out to make an indie Doom clone and 99% of the time you'll fail (Sauerbraten could be an exception, but it's a free game, so it also has different dynamics). For an indie game I can certainly recommend the PC as a starting point, with two possible routes: something written in C++ to distribute via Valve's Steam (if all goes well), or something written in C#/XNA to sell on the Live marketplace (with XNA you can even develop on the 360 while spending almost nothing). If instead you have money to spend and the possibility of starting a serious company, you're still better off targeting small platforms; the Wii or DS are ideal. But you need the ideas...

Anonymous said...

Thanks for the reply and your kindness; I very much hope to work one day in this field, which I'm passionate about.
I noticed the article on Shadow of the Colossus, really very interesting (it's also one of my favorite PS2 games).
Thanking you again, best regards.

Andrea from Udine

Alessandro Monopoli said...

You're right. After all, physics tells us with great precision how light works and how it reflects off and interacts with materials. There is nothing simpler than doing things exactly as they happen :D (well, maybe not so easy, but I'm sure you understand what I'm saying)

Anonymous said...

Mr. Deadcode:,29.0.html

movies here:

xenopus said...

DEADC0DE said...

@soren: wow, a blurry video of a realtime (voxel!) raytracer written in a weird language on a weird OS, exactly what I was looking for! Joking aside, why did you use Oberon? It's a weird choice for an RTRT, mostly because raytracing is all about SIMD these days. Kudos for the Menger sponge, it's really beautiful.

xenopus said...

Dear Deadcode:
Good question. The tracer algorithm is unsuited to GPU computation. It needs a CPU; more specifically, the algorithm traces rays through a pointer-linked tree (like an octree, but more general, or, if you like, more heterogeneous). Tracing one ray may require following 10 pointer links. To parallelize this in a SIMD way would require an immense table of... actually, I don't care to think about it. It isn't possible. OTOH, it parallelizes across multiple CPU cores just fine.
The Menger sponge, or as I call it "Serp" (for Sierpinski), was the original inspiration. I wanted a realtime sponge that I could fly through, even if it was only 100x100 rays and looked like a postage stamp (originally).

And now I should revise what I said above. If we restrict ourselves to rendering only one sponge, and make that the whole world, then in fact a GPU tracer should be possible, because the data structure representing the world will be trivially small and can be "inlined", written into the shader loop. If you are interested in that, I could construct a minimal tracer that does just that, using only one code module, and send it to you. It should then be easy for you to port it to C, Mono, or Python, understand it, and then think about GPGPU-izing it; or you might understand it well enough in Oberon and skip the intermediate port. If you ask me, I will do this for you.
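The pointer-chasing descent described above (one link followed per tree level, which is what makes SIMD batching awkward) can be sketched roughly like this. This is a minimal illustration assuming an octree-style node; the `Node` and `locate` names are hypothetical, not from xenopus's tracer:

```python
# Sketch of point location in a pointer-linked octree: finding the leaf
# voxel containing a point means chasing one child pointer per level,
# a serial dependency chain that resists SIMD parallelization.

class Node:
    def __init__(self, children=None, filled=False):
        self.children = children  # None for a leaf, else a list of 8 child Nodes
        self.filled = filled      # leaf payload: is this voxel solid?

def locate(node, x, y, z, size=1.0):
    """Descend from the root to the leaf containing (x, y, z) in [0,1)^3.

    Returns the leaf node and the number of pointer links followed."""
    links = 0
    while node.children is not None:
        half = size / 2.0
        ix, iy, iz = int(x >= half), int(y >= half), int(z >= half)
        node = node.children[ix * 4 + iy * 2 + iz]   # one pointer chase
        x, y, z = x - ix * half, y - iy * half, z - iz * half
        size = half
        links += 1
    return node, links
```

A ray traversal would call something like this repeatedly as the ray crosses leaf boundaries; each step depends on the result of the previous pointer fetch, which is exactly the dependency that makes a SIMD reformulation painful while multi-core parallelism (one ray per core) stays trivial.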

It might be possible to incorporate cellular automata / reaction diffusion shaders in the GPGPU sponge tracer, even volumetric ones.

If you are interested, please tell me what OS you prefer so I can advise you on getting Bluebottle running. zz

DEADC0DE said...

@soren: You misunderstood me. When did I talk about the GPU? I said SIMD, and that kind of thing is built into any modern CPU nowadays. And it's well possible: if you care to read OMPF and not only post in it, you'll see that many (most?) people are in fact doing SIMD RTRT. By the way, modern GPUs are really a SIMD/MIMD hybrid. Plus, there's quite some research into GPU raytracing as well, but I agree that in your case it is not useful.

I would also be curious to know how much faster your tree is compared to a more mundane acceleration structure. For dense voxel scenes like yours I would have used a cache-friendly ordering of a 3D array (i.e. swizzled in 4x4x4 blocks or in Hilbert-curve order), maybe with distance clouds built in.
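The 4x4x4 swizzling mentioned here can be sketched as a pure index computation. This is a minimal illustration of the general idea, not code from either tracer; `swizzled_index` is a hypothetical name, and grid dimensions are assumed to be multiples of 4:

```python
# Cache-friendly layout for a dense voxel grid: voxels are grouped into
# 4x4x4 blocks (64 voxels each) stored contiguously, so a ray stepping
# through spatially nearby voxels tends to touch the same cache lines.

def swizzled_index(x, y, z, nx, ny):
    """Linear index of voxel (x, y, z) in a grid swizzled into 4x4x4 blocks.

    nx, ny: grid dimensions along x and y (assumed multiples of 4)."""
    bx, by, bz = x >> 2, y >> 2, z >> 2            # which block
    lx, ly, lz = x & 3, y & 3, z & 3               # position inside the block
    blocks_x, blocks_y = nx >> 2, ny >> 2
    block = (bz * blocks_y + by) * blocks_x + bx   # blocks in row-major order
    return block * 64 + (lz * 4 + ly) * 4 + lx     # 64 voxels per block
```

With a plain row-major layout, moving one step in z jumps `nx * ny` elements; with the swizzled layout, any step inside a block stays within the same 64-voxel run, which is the cache-friendliness being claimed.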

For the Menger alone, the best choice is surely not to use any acceleration structure but to compute (and then cache, i.e. memoize) the status of a voxel (filled or not) on the fly while casting the ray through the voxel grid. This is also the approach that the Sunflow raytracer (open source, Java) uses for its Menger sponge primitive.
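The on-the-fly voxel test is cheap because a Menger sponge voxel's status follows directly from its base-3 digits. A minimal sketch of the compute-then-memoize idea (the function name is mine, and this is an illustration of the technique, not Sunflow's actual code):

```python
from functools import lru_cache

# Menger sponge membership computed on the fly and memoized: a level-n
# sponge has side 3**n voxels, and a voxel is carved away if, at any of
# the n scales, at least two of its base-3 coordinates equal 1 (the
# middle of the 3x3x3 cell).

@lru_cache(maxsize=None)
def menger_filled(x, y, z, level):
    """True if voxel (x, y, z) of a level-`level` sponge is solid."""
    for _ in range(level):
        if (x % 3 == 1) + (y % 3 == 1) + (z % 3 == 1) >= 2:
            return False          # carved out at this scale
        x, y, z = x // 3, y // 3, z // 3   # recurse to the coarser scale
    return True
```

A ray marcher would call `menger_filled` for each voxel it steps through; the `lru_cache` plays the role of the memoization, so voxels revisited by neighboring rays cost a dictionary lookup instead of a recomputation.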

xenopus said...

"For the Menger alone, the best choice surely is to not use any accelleration structure but to compute (and then cache - memoize) the status of a voxel (filled or not) on the fly while casting the ray through the voxel grid."

I don't completely understand. But the Menger iterative "fractal" (i.e. fractal expansion stops after n levels) can be traced as a fractal; that is, the geometry is calculated PER RAY, so the only "structure" is a single 3x3x3 array representing the expansion rule.

As for speed, I doubt that my structures are even close to optimal *for a given "scene"*: they are a *general case*. And I do read OMPF; my tracer is very dissimilar to the raytracers usually discussed there. I don't understand why you think Oberon is unsuited to SIMD. It is a systems-level language. There are new compiler extensions that are supposed to enable very efficient handling of large matrices, not that that would help the tracer.

DEADC0DE said...

Soren: I came to this post again to delete a spam comment and saw your messages. I don't know why I was so dismissive of your work back then... Well, maybe it was a bad day. Anyhow, all the links are dead now, but I don't think your work deserved that.

xenopus said...

I know why you were so dismissive. It is because that is the way that people treat me. I wish you had said something about cache obliviousness in 2008. Here is a film of the newly oblivious tracer:

DEADC0DE said...

Mhm, I don't know; most links are dead, so I just assume it was my bad temper. Looking at your new videos, I can't see anything: it's all quite messy and glitchy. It might be the greatest new thing in rendering or utter crap; it's hard for an external eye to tell... Maybe one day write about it, explain its purpose, why it's cool (or not), etc., and you might attract the attention of more people.