
Showing posts with label The industry. Show all posts

21 October, 2016

Cinematography, finally!

I made some screen grabs from the recently released Red Dead Redemption 2 announcement trailer.

These were quite hastily and heavily processed (sorry) in an attempt to make the compression artifacts from the video show less.

Basically these are an average of 5-6 frames, a poor man's temporal anti-aliasing let's say (and the general blurring helps you focus on the overall balance of the image instead of being distracted by small details), plus tons of film grain to hide blocking artifacts.
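The recipe above can be sketched in a few lines of numpy; `average_frames` and `add_grain` are names I'm making up for illustration, and the "frames" here are synthetic rather than actual video grabs:

```python
import numpy as np

def average_frames(frames):
    """Average a stack of frames to suppress temporally unstable
    compression artifacts: a poor man's temporal anti-aliasing."""
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
    return stack.mean(axis=0)

def add_grain(image, strength=8.0, seed=0):
    """Overlay gaussian 'film grain' to mask blocking artifacts."""
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, image.shape)
    return np.clip(image + grain, 0.0, 255.0)

# Demo: six noisy "captures" of the same underlying frame.
rng = np.random.default_rng(1)
truth = np.full((64, 64), 128.0)
frames = [truth + rng.normal(0.0, 20.0, truth.shape) for _ in range(6)]
out = add_grain(average_frames(frames))
```

Averaging N frames shrinks the temporally-unstable noise by roughly a factor of sqrt(N), while the grain adds back high-frequency detail that distracts the eye from DCT blocking.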

I am in love. Tasteful, amazing frames that don't look like something arbitrarily color-graded at the last minute to seem somewhat cool, like a bad Instagram filter.

Enjoy. Ok, let me spend a couple more words...

Some took this to be a technical comparison between titles. In a way it is, but it's not a comparison of resolution and framerate, or the amount of leaves rendered, texture definition and so on.

On all these counts I'm sure there are plenty of engines that do great, even better than RDR2, especially engines made to scale to PC.

Thing is, who cares? This shouldn't matter anymore. We're past the point where merely drawing more pixels or more triangles is where technology is at. We're not writing tri-fillers, we're making images. Or at least, we should be.

Technology should be transparent. Great rendering should not show; great rendering happens when you can't point your finger at a given feature or effect implementation and name it. We are a means to an end, and the end is what matters. And it is still a matter of technology, of research and of optimization. But a different one.

It is absolutely easier to push a million blades of grass on screen than to make a boring, empty room that truly achieves a given visual goal.

It is absolutely easier to make a virtual texturing system that allows for almost one-to-one texel to pixel apparent resolution than to accurately measure and reproduce a complex material.

It is absolutely easier to make a hand optimized, cycle-tight uber post-effect pipeline than to write artists tools for actual rapid iteration and authoring of color variants in a scene.

I'm not claiming that this particular game does any of these things right. Maybe it's just great art direction. Maybe it's achieved by lots of artists iterating on the worst engine and tools possible. Or maybe it's years of R&D; it's very hard to tell, even more so from a teaser trailer (albeit it totally looks legitimately real-time on the platform). But that's the point: great rendering doesn't show, it just leaves great images.

And I hope that more and more games (RDR is certainly not the only one) and art departments and engineers realize the shift we're seeing. Rendering hasn't "peaked" for sure, but we need to move our focus more and more towards the "high level".

We solve concrete problems; technology otherwise is useless. And, yes, that means you can't do good tech without artists, because you won't ever have great images without great art. Working in isolation is pointless.

And now... Enjoy!















The first Red Dead Redemption also holds up quite well all things considered... These are respectively from the introduction cinematic and one of the first missions:



And as "bonus images", some screens I've found from the recent Battlefield 1 and The Witcher 3.




28 August, 2016

A retrospective on Call of Duty rendering

In my last post I did a quick recap of the research Activision published at Siggraph 2016. As I already broke my long-standing tradition of never mentioning the company I work for on my blog, I guess it wouldn't hurt, for completeness' sake, to do a short "retrospective" of sorts, recapping some of the research published about the past few Call of Duty titles.

I'll remark: my opinion on the blog is, as always, personal, and my understanding of COD is very partial, as I don't sit in production for any of the games.

I think COD doesn't often do a lot of "marketing" of its technology compared to other games, and I guess it makes sense for a title that sells so many copies to focus its marketing elsewhere and not pander to us hardcore technical nerds. I love the work Activision does on the trailers in general (if you haven't seen the live-action ones, you're missing out), but it's still a shame that very few people consider what tricks come into play when you have a 60Hz first-person shooter on PS3.


Certainly, -I- didn't! But my relationship with COD is also kind of odd: I was fairly unaware of and uninterested in it until someone lent me the MW2 DVD for 360, I think, a long time after release, and from there I binge-played MW1 and Black Ops... Strictly single-player, mostly for their -great- cinematic atmosphere (MW2 is probably my favorite of the previous generation in that regard, together with Red Dead Redemption), and never really trying to dissect their rendering.
By the way, if you missed out on COD's single player, I think this critique of the campaign over the various titles is outstanding, and worth your time.

Thing is, most games near the end of the 360/ps3 era went on to adopt new deferred rendering systems, often without, in my opinion, having solid reasons to do so. 

COD instead stayed on a "simple" single-pass forward rendering, mostly with a single analytic light per object. 
Not much to talk about at Siggraph there (with some exceptions), but a great focus on mastering that rendering system: aggressive mesh splitting per light and material (texture layer groups), an engine -very- optimized to emit lots of drawcalls, and lightmapped GI (which manages to be quite a bit better than most of the no-GI many-light deferred engines of the time).

Modern Warfare 2

IMHO a lesson in picking the right battles and mastering a craft before trying to add a lot of kitchen-sink features without much reasoning, which is always very difficult to balance in rendering (we all want to push more "stuff" into our engines, even when it's actually detrimental, as all unneeded complexity is).

Call of Duty: Ghosts

The way I see Ghosts is as a transition title. It's the first COD to ship on next-gen consoles, but it still had to be strong on current-gen, which is to be expected for a franchise that needs a large install base to be able to hit the numbers it hits. 

Developers now had consoles with lots more power, but still had to take care of asset production for the "previous" generation while figuring out what to do with all the newfound computational resources.

For Ghosts, Infinity Ward pushed a lot on geometrical detail rather than doing much more computation per pixel, and that makes sense, as it's easier to "scale" geometry than it is to fit expensive rendering systems on previous-gen consoles. This came with two main innovations: hardware displacement mapping and hardware subdivision (Catmull-Clark) surfaces.



Both technologies were a considerable R&D endeavor and were presented at Siggraph, GDC and in GPU Pro 7.

Although both are quite well known and researched, neither was widely deployed in console titles, and the current design of the hardware tessellator on GPUs is of limited utility, especially for displacement, as it's impossible to create subdivision patterns that match well the frequency detail of the heightmaps (this is currently, afaik, the state of the art, but doesn't map to tiled and layered displacement for world detail).



Wade Brainerd recently presented, with Tim Foley et al., some quite substantial improvements for hardware Catmull-Clark surfaces and made a proposal for a better hardware tessellation scheme at this year's Open Problems in Computer Graphics.

A level in COD:Ghosts, showcasing displacement mapping

For the rest, Ghosts is still based on mesh-splitting single-pass forward shading and non-physically based models (Phong, which the artists were very familiar with).

Personally, I have to say that IW's artists did pull the look together and Ghosts can look very pretty, but, in general, the very "hand-painted" and color-graded look is not the art style I prefer.

Call of Duty: Advanced Warfare

Advanced Warfare is the first COD to have its production completely unrestricted by "previous-gen" consoles, and compared to Ghosts it takes an approach that is almost a complete opposite, spending much more resources per pixel, with a completely new lighting/shading/rendering system.

At its core is still a forward renderer but now capable of doing many lights per surface, with physically based shaders. It does even more "mesh splits" (thus more drawcalls) and generates a huge amount of shader permutations to specialize rendering exactly for what's needed by a given piece of geometry (shadows, lighting, texturing and so on...).
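The permutation idea can be sketched generically (this is not Advanced Warfare's actual system, which I can only guess at; the flag names and the `compile_variant` helper are purely illustrative): each piece of geometry requests only the features it needs, encoded as a bitmask, and identical requests share one compiled variant.

```python
from functools import lru_cache

# Hypothetical feature flags a split mesh chunk might request.
SHADOWS, LIGHTMAP, DETAIL_TEX, SKINNING = 1, 2, 4, 8

@lru_cache(maxsize=None)
def compile_variant(flags):
    """Stand-in for an offline shader compile: build the #define preamble
    that specializes one permutation (the real output would be a shader)."""
    defines = []
    if flags & SHADOWS:
        defines.append("#define USE_SHADOWS")
    if flags & LIGHTMAP:
        defines.append("#define USE_LIGHTMAP")
    if flags & DETAIL_TEX:
        defines.append("#define USE_DETAIL_TEX")
    if flags & SKINNING:
        defines.append("#define USE_SKINNING")
    return "\n".join(defines)

# Each mesh split asks only for what it needs; identical requests
# share a single compiled variant thanks to the cache.
a = compile_variant(SHADOWS | LIGHTMAP)
b = compile_variant(SHADOWS | LIGHTMAP)
assert a is b
```

The trade-off is the classic one: perfectly specialized shaders run fast, but the number of permutations (and drawcalls) explodes combinatorially, which is why the engine has to be so good at emitting them cheaply.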

We did quite a bit of research on the fundamental math of PBR for it, and it employs an entirely new lightmap baking pipeline, but where it really makes a huge difference, in my opinion, is not the rendering engine per se, but its keen dedication to perceptual realism.



It's a physically based renderer done "right": getting the PBR math right is relatively "easy"; teaching PBR authoring to artists is much harder (arguably not something that can be done over a single product, even), but making sure that everything makes (perceptual) sense is where the real deal is, and Sledgehammer's attitude during the project was just perfect.

Math and technology of course matter, and checking the math against ground-truth simulations is very important, but you can make a perfectly realistic game with empirical math and a proper understanding of how to validate (or fit) your art against real-world reference, and you can, on the other hand, produce a completely "wrong" rendering out of perfectly accurate math...

"Squint-worthy" is how Sledgehammer's rendering lead Danny Chan describes AW's lighting quality.
Things make sense; they are overall in the right brightness ratios.

Advanced Warfare also brought many other innovations: Jorge Jimenez authored a wonderful post-effects pipeline (he really doesn't do anything if it's not better than the state of the art!), and AW also shipped a new version of our perpetually improving performance capture (and face rendering) technology, developed in collaboration with ICT.

But I don't think that any single piece of technology mattered for AW more than the studio's focus on perceptual realism as a rendering goal. And I love it!

Call of Duty: Black Ops 3 

I won't talk much about Black Ops 3, also because I already did a post on Siggraph 2016 where we presented lots of rendering innovations done during the previous few years. 
The only presentation I'd like to add here, which is a bit unrelated to rendering, is this one: showing how to fight latency by tweaking animations.

Personally I think that if Sledgehammer's COD is great in its laser focus on going very deep into a single objective, Black Ops 3 is just crazy. Treyarch is crazy! Never before have I seen so many rendering improvements pushed on such a large, important franchise. So I wouldn't really know where to start...

BO3 notably switched from forward to deferred, but I'd say that most of what's on screen is new, both in terms of being coded from scratch and oftentimes also in terms of being novel research. Even behind the scenes lots of things changed: tools, editors, even the way it bakes GI is completely unique and novel.



If I had to pick, I guess I'd narrow BO3's rendering philosophy down to unification and productivity. Everything is lit uniformly, from particles to volumetrics to meshes, there are a lot fewer "rendering paths", and most of the systems are easier to author for.


All this sums up to a very coherent rendering quality across the screen: it's impossible to tell dynamic objects from static ones, for example, and there are very few light leaks, even in specular lighting (which is quite hard to occlude).

Stylistically I'd say, to my eyes, it's somewhere in between AW and Ghosts. It's not quite as arbitrarily painted as Ghosts, and it is a PBR renderer done paying attention to accuracy, but the data is quite a bit more liberally art-directed, and the final rendered frames lean more towards a filmic depiction than close adherence to perceptual realism.

Call of Duty: Infinite Warfare

...is not out yet, and I won't talk about its rendering at all, of course!

It's quite impressive though to see how much space each studio has to completely tailor their rendering solutions to each game, year after year. COD is no Dreams, but compared to titles of similar scope, I'd say it's very agile.

So rest assured, it's yet again quite radically different than what was done before, and it packs quite a few cute tricks... I can't imagine that most of them won't appear at a Siggraph or GDC, till then, you can try to guess what it's trying to accomplish, and how, from the trailers!



Three years, three titles, three different rendering systems, crafted for specific visual goals and specific production needs. The Call of Duty engines don't even have a name (not even internally!), just a bunch of very talented, pragmatic people with very few artificial constraints on what they can change and how.

14 March, 2015

Design Optimization Landscape


  • How consciously do we navigate this?
    • Knowledge vs Prototyping
    • Width vs Depth of exploration
    • Speculation is "fast" to move but with uncertainty
    • Application focuses and finds new constraints, but it's expensive
  • Multidimensional and Multiobjective
  • Fuzzy/Noisy, Changing over time
  • We are all optimizers
    • We keep a model of the design landscape, updated by information (experiments, knowledge). Biased by our psychology
    • We try to "sample" promising areas to find a good solution
    • Similar to Bayesian Optimization (information directed sampling)
Bayesian Optimization. Used in black-box problems that have a high sampling (evaluation) cost.
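The analogy in the notes above can be made concrete with a toy Bayesian optimization loop over a made-up one-dimensional "design landscape": a Gaussian-process surrogate is refit after each expensive "experiment", and the next sample is taken where the upper confidence bound (mean plus uncertainty) is highest. Everything here, including the test function, is invented for the sketch:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(xs, ys, grid, noise=1e-4):
    """Gaussian-process posterior mean/std on `grid`, given samples (xs, ys)."""
    K_inv = np.linalg.inv(rbf(xs, xs) + noise * np.eye(len(xs)))
    K_s = rbf(grid, xs)
    mean = K_s @ K_inv @ ys
    var = 1.0 - np.sum(K_s @ K_inv * K_s, axis=1)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def design_quality(x):
    """Made-up 'design landscape'; in reality each evaluation is an
    expensive experiment or prototype."""
    return np.exp(-((x - 0.3) ** 2) / 0.02) + 0.3 * np.exp(-((x - 0.8) ** 2) / 0.01)

grid = np.linspace(0.0, 1.0, 200)
xs = np.array([0.1, 0.5, 0.9])            # initial "experiments"
ys = design_quality(xs)
for _ in range(10):                       # information-directed sampling
    mean, std = gp_posterior(xs, ys, grid)
    x_next = grid[np.argmax(mean + 2.0 * std)]  # optimism under uncertainty
    xs = np.append(xs, x_next)
    ys = np.append(ys, design_quality(x_next))
best = xs[np.argmax(ys)]
```

The loop alternates between exploring regions of high uncertainty (width of exploration) and exploiting promising areas (depth), which is exactly the trade-off the bullet points describe.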

27 February, 2015

Why the rendering in The Order 1886 rocks.

Premise: Initially I thought I would post an "analysis" similar to the one I did on Battlefield a while ago, just my personal notes and screenshots taken as I played the game, isolated from any outside source of information (discussions between coworkers and other people, which I bet are happening everywhere right now across the industry). 
In the end I took an even more "high level" approach, so these notes won't really talk about techniques and speculations (or at least, that's not the main point of view).

This is also because I am quite persuaded nowadays that focusing on what matters in an image, being really anal about image quality (correctness, perception), matters more than the specific techniques.

Certainly it matters more than "blindly" implementing a checklist of cool technology; better to do less and even remove features, but be really conscious about what makes an image and why, rather than adding more and more fancy things without understanding.

Also, in the following I'll often say "you can/can't notice...". I have to specify, because it matters, that I mean to analyze things that you can notice during mostly "normal" gameplay (in a dark room, with a good projector on a sizeable screen), albeit undertaken by a rendering nerd. 
Which is different from the work of actually trying to pixel-peep and reverse-engineer and break everything on purpose (not that it's easy anyhow in this game...). Which is -still- different from the (very hard) work needed to find all the mistakes, as many don't register rationally but are still perceptually important.

1) Image stability. Technology that doesn't "show".

Everybody is raving about The Order's antialiasing, and rightfully so. We know it's some sort of 4x MSAA (nowadays that still leaves open a lot of variables on how it's implemented...) and that certainly helps a lot.
But it's not just 4x MSAA. We've seen many titles with MSAA, and you can even crank it up in PC titles, yet I'd say nothing before came close to the "offline" rendering feeling The Order exhibits.

The airship level is particularly impressive. Thin-wire AA? Neat use of alpha test?
And I think it's not just supersampling, but attention to pixel quality overall. Supersample what can be supersampled, to the extent it can be; filter out the rest, or move it into noise. It's not just antialiasing, it's the whole post-effect pipeline working together (if you pixel-peep you can actually notice how the motion blur sometimes actively helps to kill a tiny bit of leftover specular shimmering).

Do less (in this case, -show- less pixel details) but better (more stable).

Some of the blurs can hinder gameplay a bit, but the image quality is undeniable.
And while I'm not personally against screen-space effects, in The Order they are notable for their absence. I spent some time trying to see if they had SSAO, SS reflections and so on, and I didn't see anything.
And then you realize, all these techniques, at least as they have been implemented so far, can be spotted, they are not stable, they have telling artifacts.

If a rendering technique "shows", it's already a sign that something is wrong (e.g. you can say "this image has limited depth of field", but if you can spot that the image uses a separable blur, then that's already a problem).

The Order is remarkable even in that regard: a very technical, trained eye might form some educated guesses, but only very vaguely; I'd say it's really hard to pinpoint most of the specific techniques.

2) Occlusions. Lighting without leaking.

I always say: better to have missing lights than light where there shouldn't be any.


Even in photography it's easier to see added light than "removed".
Even if you can't spill due to lack of shadowing...
Behind the scenes from Peter Lindbergh and Gregory Crewdson.
Even the unfortunate concession we sometimes make to gameplay, adding "character" lights to better separate characters from the background, can be visually quite "disturbing", but there it is by design - it's supposed to show.

Dark Souls 2: not the first, nor the last game to highlight characters.
Don't add an unshadowed light if it's noticeable. And it will -always- be noticeable if it leaks behind surfaces.
While for example a "hair light" on a character in a cinematic can be quite hard to register, a ceiling light shining under a table "disconnects" surfaces and kills realism.

Nowadays occlusion is becoming "easier" with the ability to render more shadowmaps (albeit people still complain about the intricacies of shadowmapping), and with more memory many things can be cached as well.
Static, diffuse indirect global illumination is also not a huge deal (e.g. lightmaps and probes).

But specular will kill you. It's quite a hard problem. Bright highlights shining around object silhouettes, behind walls and so on are very tricky to occlude with ambient occlusion and other non-strongly-directional methods, exactly due to their intensity.

If you've ever played with screen-space reflections you might have noticed that, more than solving the problem of missing reflections, they are useful because they effectively capture the right occlusion (together with reflection) of objects in contact with surfaces (which won't be captured by a cubemap probe, even after parallax-correcting it as a box).

This recent "Paris" scene, done in Unreal, albeit very nice, clearly shows
how specular is hard and screenspace is not enough: see it in motion.
Again, The Order doesn't reveal its hand, whatever it uses, it just works.

The importance of specular leaks, together with the fact that it's better to over-occlude than to leak, is why I think that if you can't do anything more, at least baking bent normals (even to the point of having only bent normals, without carrying two sets) pays off.
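For illustration, a bent normal can be baked by averaging the unoccluded directions over the hemisphere around the surface normal; the average points towards the most "open" part of the environment, which is the direction you want to bias (and occlude) specular with. This is a generic sketch, not any particular engine's baker; the `occluded` callback stands in for actual ray casts against the scene:

```python
import numpy as np

def cosine_hemisphere_samples(count, normal, seed=0):
    """Cosine-weighted directions on the hemisphere around `normal`."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(count), rng.random(count)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)], axis=1)
    # Build an orthonormal basis around the normal.
    up = [0.0, 1.0, 0.0] if abs(normal[2]) > 0.9 else [0.0, 0.0, 1.0]
    t = np.cross(normal, up)
    t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local @ np.stack([t, b, normal])

def bake_bent_normal(normal, occluded, count=256):
    """Average the unoccluded directions: the result points towards the
    most open part of the hemisphere (its length also relates to AO)."""
    dirs = cosine_hemisphere_samples(count, np.asarray(normal, float))
    visible = dirs[~occluded(dirs)]
    bent = visible.mean(axis=0)
    return bent / np.linalg.norm(bent)
```

With no occluders the bent normal degenerates to the surface normal; with, say, the +X half of the hemisphere blocked, it tilts towards -X, away from the occluder.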

3) Atmospherics. Shading the space in-between surfaces.

We always had some atmospherics. Fog, "ground" fog, "god" rays... 

What The Order does there is quite interesting though. London is foggy, and the air is not some kind of afterthought special effect. It's a protagonist in the shading of the image.

It's a recurring theme by now, but again, you can't really "see" it as a special effect or a specific technique. It's a meaningful contribution to the image that is much more subtle and harder to capture.


The surprising thing to me is that I couldn't really notice dynamic occlusion effects in the fog (volumetric shadows or god rays). What surprises is not such effects, but the fact that -every- light scatters, everything subtly changes the color of the fog.

And again the lack of artifacts. You can't see voxels, you can't see leaks, you can't notice particles rotating and fading in and out. Marvellous.

4) Materials. Attention to detail.

To each technique its own abuses. First it was bloom. Then lens flares. Then depth of field and so on...
There's always a new "must have" feature that gets cranked to eleven to be "cool" and hip, until everyone gets disgusted and dials it back, just to latch onto something else.

Stop.
For PBR it seems metals can be problematic. I've seen more than one game nowadays that pushes shiny, perfectly polished stuff everywhere; I can't really understand why.

Dragon Age: Inquisition: a wonderful game done with a wonderful engine.
But it really has some issues with shiny metals; the Orlais palace is one of the worst offenders.
The Order instead never loses its composure. Materials are never constant, never flat; they are varied and realistic, and they blend realistically into each other. Even the small details, like lightbulbs and refraction, or the sheen of the ink on printed paper, are great.
Texture resolution is constant, you never notice issues between textures of nearby objects with different densities.

Physically based rendering, done mindfully.

Bonus round: baking.

Many games, especially around the period when deferred got all the hype, lost many of these points (made them worse) by sacrificing baked solutions for real-time computations, often in a misguided attempt to solve authoring problems (which should be the domain of better -tools-, not runtime changes).

Real-time solutions are still useful when gameplay requires dynamic scenes and updates (and we can't stream these!), but if you need realtime feedback on GI, write a realtime path tracer for your lightmaps...

Bonus round: F+.

The Order 1886 is famously based on a forward+ lighting engine. I actually wonder how different it would have been on an old-school "splitting" (static light assignment to geometry) forward renderer with generous amounts of baking of "secondary" lights. I don't know.

Bonus round: Next-gen and PC hardware.

GPU power won't matter until we can specifically target it.
The Order shows that, and that's why a PS4 title can look better than other games which were still targeting the previous console generation and then were upgraded with a few extra techniques to push high-end PC GPUs.

Pushing certain marginal stuff to eleven doesn't make as much of a quality difference as creating assets and effects specifically for more powerful GPUs, and that's why, even if PCs were already more powerful than these consoles at launch, we have to wait for console games to improve in order to see real advances in game graphics.

30 August, 2014

I Support Anita


One of the first things you learn when you start making games is to ignore most of the internet. Gamers can undoubtedly be wonderful, but as in all things internet, a vocal minority of idiots can overtake most spaces of discourse. Normally that doesn't matter: these people are irrelevant and worthless, even if they might think they have some power in their jerking circles. But when words escalate into other forms of harassment, things change.

I'm not a game designer. I play games, I like games, but my work is about realtime rendering; most often, the fact that it's used by a game is incidental. So I really didn't want to write this: I didn't think there was anything I could add to what has already been written, and Anita herself does an excellent job of defending her work. Also, I think the audience of this blog is the right target for this discussion.

Still, we've passed a point and I feel everybody in this industry should be aware of what's happening and state their opinion. I needed to make a "public" stance.

Recap: Anita Sarkeesian is a media critic. She began a successful kickstarter campaign to produce reviews of gender tropes in videogames. She has been subject to intolerable, criminal harassment. People who spoke in her support have been harassed, websites have been hacked... Just google her name to see the evidence.

My personal stance:
  • I support the work of Anita Sarkeesian, as I would anybody's speaking intelligently about anything, even if I were in disagreement.
  • I agree with the message in the Tropes Vs Women series. I find it to be extremely interesting, agreeable and instrumental in raising awareness of a phenomenon that in many cases is not well understood.
    • If I have any opinion on her work, it is that I suspect in most cases hurtful stereotypes don't come from malice or laziness (neither of which she mentions as possible causes, by the way), but from the fact that games are mostly made by people like me: male, born in the eighties, accustomed to a given culture.
    • And even if we can logically see the issues we still have in gender depictions, we often lack the emotional connection and ability to notice their prevalence. We need all the critiques we can get.
  • I encourage everybody to take a stance, especially mainstream gaming websites and gaming companies (really, how can you resist being included here), but even smaller blogs such as this one.
    • It's time to marginalize harassment and ban such idiots from the gaming community. To tell them that it's not socially acceptable, that most people don't share their views. 
    • Right now most of the video attacks on Tropes Vs Women (I've found no intelligent rebuttal yet) are "liked" on YouTube. Reasonable people don't speak up, and that's even understandable: nobody should argue with idiots, they are usually better left ignored. But this got out of hand.
  • I'm not really up for a debate. I understand that there can be a debate on the merit of her ideas, there can be a debate about her methods even, and I'd love to read anything intelligent about it.
    • We are way past a discussion on whether she is right or wrong. I personally think she is substantially right, but even if she were wrong, I think we should all still fight for her to be able to do her job without such vile attacks. When these things happen, to such an extent, I think it's time for the industry to be vocal, for people to stop and just say no. If you think Anita's work (and she as a person) doesn't deserve at least that respect, I'd invite you to just stop following me, seriously.

26 February, 2014

Valve VR. I want to believe.

Premise: Spoiler alert, I guess. Although it might not matter, it's worth noting that if you're going to experience either Valve's or Rift's Crystal Cove demo anytime soon (GDC perhaps), you might want to consider approaching it without any preconceived notions that this or other articles might give. Also, lengthy as usual.

Introduction
Valve fanboy, I am not. In fact I might say I still hold a grudge against it: I want Valve to make games, not stickers to slap on PCs in a marketing stunt in the hope of moving some units of a platform in general decline. I understand that Steam makes money, but on a personal level I can't really care...

So this morning, after waking up way too early for my habits, I arrive at Valve's building and can't help muttering "so cheap" as I look for the street number and make sure I'm in the right place. Of course it's silly of me, as money is not the reason they didn't bother to put a logo on the facade nor the name on the directory, but anyhow I digress... The "I" in the titular "I want to believe" refers not to my faith, but to how my brain reacted to the system.

Now, it's hard to put an experience into words, especially something as novel as this, as there aren't easy analogies to make or frames of reference in most people's experience to anchor to. I'll try my best to explain how it works, or how I think it works.

Visuals are a tricky beast. Rendering engineers are often deeply embedded in very technical concerns, but at the end of the day, what really matters is psychology. What happens, when the stars align and things go right (because, mind you, we're really far from making this a science, and the little science there is, most of the time, is still very far from mingling with entertainment), is that we create a visual experience that somehow tickles the right neurons in our brain to "evoke" a given atmosphere, sensations, feelings that we learned in real life and that are "replayed" by these stimuli.
Mostly, for me, that has happened with environments, I guess because we're really discriminating when it comes to characters: Call of Duty, Red Dead Redemption and Kentucky Route Zero are all great examples.

When we try to achieve this via photorealism, we hope that by simulating what's perceptually important, what we can notice in a scene, the light, the materials, the shapes and so on, we reach a point where our brain accepts the image we generated as a reproduction of reality, like a photo. And we hope that our artists can use this tool that gets us close to reality to more easily fool our brains into producing emotions, because we're nearer to the paths that normally fire when we experience things in the real world.

Inside Virtual Reality
A good VR experience completely sidesteps all this. Abrash says it right: when things align in VR you get presence, and it's an infinitely more powerful tool. Presence is the sensation that you are there; that virtual wall is ten meters from you. And it is really unquestionable. Realism doesn't matter anymore.
The VR prototype suffered from all kinds of defects: it's clearly "low resolution", a lot of demos didn't have lighting, most had only lightmaps or diffuse, not specular, most had no textures, I could see banding and even some aliasing, I could spot errors in the lightmaps, even quite clearly the ghosting from the OLEDs, and so on and on. A nightmare for a rendering engineer; on a 2D screen you would have called it the worst visuals ever.
Yet you were fooled into thinking you were there, not through realism but through the immersion that is possible with the low-latency, full head tracking (rotation AND position) stereo sauce Valve has implemented. I suspect there are a million things to get just right; we discussed how the OLEDs provided enough dynamic range that you didn't question the scene much, for example, and they have a catalog of things that are crucial and things you can ignore.

The most succinct way to describe this is that in all media before this, at best you had the impression of looking at a greatly detailed, technologically advanced reproduction of reality (think of the best immersive stereo-projection movie you've ever seen, for example). Here, even when you're looking at really basic renderings, you think you are in some sort of weird, wrong alternate world, similar if you wish to certain art installations, rooms that play with light and shapes to create very unusual experiences.

The demo environments Valve created (or actually, I should say, that I witnessed) were quite tame. Clearly they didn't want to push it, to avoid certain people reacting negatively to the experience. Most of the time the scene was static, not in the sense of devoid of animation, but as in a fixed room you could move in, one that wasn't moving relative to you. Things never got scary and I didn't interact in any way with the simulation (even if, numerous times, I instinctively reached out with my hands towards objects and avoided objects getting too near me), yet there were some intense moments.

A few people have now described a scene where you're in the simplest room possible, no shading, no lighting, yet you start on a small ledge and you can't avoid feeling vertigo, and have to actively force yourself to step into the "void". You know it's not real, at an intellectual level. You have all the possible visual hints saying it's not real, yet your brain tells you otherwise.
In another, switching for the first time to a scene with some animated robots, at the moment of the switch I had a second of high alertness, as the primitive brain steps in and rushes to prepare for suspicious activity.
The weirdest sensation was at the end, flying (very slowly, as apparently motion quite easily creates discomfort) through CDAK. There the visuals were distorted enough (see the youtube video linked) that I didn't feel as much presence (so there are some extreme cases where visuals can break it), yet when some of the weird blue objects passed through me I had, again for a split second, a sensation I could only later rationalize as somehow reminiscent of being in the sea, I guess because that's the only thing in my experience close to going through something like that.

Practicalities
When does presence break? Visuals can be pushed quite far before they break presence. Mind you, rendering will be a huge issue there, and I'm sure good rendering does make a difference in removing the idea that you are immersed in something quite odd, but again, simple visuals alone don't break immersion. I would also have loved to see a scene with and without various rendering effects (visual hints), but alas, no such luck.
Even the low resolution, blur and ghosting read to me as defects of my vision, like seeing through glasses or through a dirty motorbike helmet, not a problem with the "reality" of the scene. Impossible behaviours do break it. In one of the scenes, for example, there were some industrial machines at work behind a glass wall. Poking my head through the wall into the room breaks it.
It's not like suddenly losing stereo, as in one of those cross-eyed stereograms where you focus on the page and lose the effect. The closest analogy I can think of is Indiana Jones' leap of faith, and you might have experienced something similar with some visual tricks in theme parks: you realize it's an illusion.

There are a myriad ways of doing something that breaks presence; many things that are totally acceptable on a normal display are intolerable in VR. You might know that you can't really use normal maps or any other non-stereoscopic hints of detail, for example, but you are also much more aware of errors in animation and physics, not to mention characters (which weren't demoed at all).
And it's good if the consequence of an error is only to bring you back to the idea that you're in a VR helmet; the bad cases are when certain hints are very strong but certain others are completely wrong or missing, like falling without acceleration, wind and so on, as these can cause discomfort.

Conclusion
In conclusion, it was better than I expected, as I thought all the visual issues would have a bigger impact. I think it can be used to create incredible, amazing experiences that will feel like nothing felt before. And it obviously has a lot of applications outside entertainment as well.
I think and hope all this research will also be useful for traditional image synthesis, as for the first time we really have to systematically study perception and how our brain works, and not just get lucky with it. Also, certain technological advances, for example in low-latency rendering systems, will directly apply to traditional games as well.

I also think it will remain for a long while a very niche product, or if it succeeds it will be due to a killer app that doesn't look in any shape or form like a traditional game: for certain technological issues we can clearly see a roadmap (weight, tracking, resolution, lag and so on), but for others we don't have any idea yet, mostly controls, but also how to deal with all the situations where our brain is accustomed to having more sensory hints than just what the eyes provide.
Even tiny things, like the fact that with position tracking we can clip through everything, are quite an issue to solve. Fast movement is hard as well; it exacerbates the technical issues (lag, refresh rates and so on) to a degree that even "cockpit" games are hard (not to mention the lack of acceleration), even worse if you have to move your body in any athletic way, as it's easy to knock the VR system out of the optimal alignment it needs for crisp vision.

I don't think games can be "ported"; an FPS in VR will be much more of a gimmick than, say, an FPS with virtual joysticks on an iPhone. We will need radically new stuff: low movement (for now at least, later on maybe some cockpit games can work well enough for the masses), novel ways of interaction (gaze for example can work decently, wands do work great... kinect-like stuff is very laggy and thus limited to gesture recognition, not direct manipulation, right now), new experiences...

It will probably be for a few early adopters, but I'm quite determined to be among them, just to be able to create weird environments that feel real.

P.S. I saw the number "3" multiple times in Valve offices. You certainly know what that means...

Update: Sony's Morpheus prototype is worse than the Oculus DK2 as far as I can tell, and the Oculus DK2 is still quite a bit behind Valve's demo room. It needs much more resolution.

12 December, 2013

Shit people say: graphics have "peaked"

If you think that rendering has peaked, it's probably a good sign. It probably means you're not too old and haven't lived through the history of 3D graphics, where at every step people thought it couldn't get better. Or you're too old and don't remember anymore...

Really, if I think of myself on my 486sx playing Tie Fighter back then, shit couldn't get any better. And I remember Rebel Assault, the first game I bought when I got my first CD-ROM drive. And so on and on (and no, I didn't play only Star Wars games, but at the time LucasArts was among the companies that made all the must-buy titles... until the 360 I've always been a "computer" gamer; nowadays I play only on consoles).

But but but, these new consoles launched and people aren't that "wowed" right? That surely means something. We peaked, it happened.

I mean, surely it's not as if, when the 360 and later the PS3 came out, games weren't looking incredibly much better than what we had on PS2, right? (If you don't follow the links, you won't get the sarcasm...) And certainly, certainly the PS2 (touted as more powerful than an SGI... remember?) launched with titles that blew late PS1 titles right out of the water. I mean, it wasn't just more resolution.

Maybe it's a lack of imagination. As I wrote, I was the same; many times as a player I failed to imagine how it could get better. To a degree I think it's because video-game graphics, like all forms of art, "speak" to the people of their time, first and foremost. Even if some art might be "timeless", that doesn't imply its meaning remains constant over time; it's really a cultural, aesthetic matter which evolves over time.
Now I take a different route, which I encourage you to try. Just go out, walk. See the world, the days, the nights. Maybe pick up a camera... How does it feel? To me, working to improve rendering, it's amazing. Amazing! I could spend hours walking around and looking in awe and envy at the world we can't yet quite capture in games.
Now think if you could -play- reality, tell stories in it. Wouldn't it be a quite powerful device? Wouldn't it be the foundation for a great game?

Stephen Shore, one of the masters of American color photography

Let me be explicit though: I'm not saying that realism is the only way. In the end we want to evoke emotions, and that can be done in a variety of ways, I'm well aware. Sometimes it's better to illustrate and let the brain fill in the blanks; emotions are tricky. Take that incredible masterpiece that is Kentucky Route Zero, which manages to use flat-shaded vector graphics and still feel more real than many "photo-realistic" games.
It's truly a game that every rendering engineer (and every other person too) should play, to be reminded of the goals we are working toward: pushing the right buttons in the brain, tricking it into remembering or replaying emotions it experienced in the real world.
Other examples you might be more accustomed to are Call of Duty (most of them) and Red Dead Redemption, two games that are (even if it's actually very debatable) not as technically accomplished as some of the competition, yet manage to evoke an atmosphere that most other titles don't even come close to.

At the end of the day, photo-realism is just a "shortcut": if we have something that spits out realistic images for every angle and every lighting, it's easier to focus on the art, the same way it's cheaper to film a movie than to hand-paint every frame. It's a set of constraints, a way of reducing the parameter space, from the extreme of painting literally every pixel of every frame to more and more procedural models where we "automate" a lot of the visual output and allow creativity to operate on the variables left free for tuning (e.g. lighting, cinematography and so on).
It is -a- set of constraints, not the -only- one. It's just a matter of familiarity: as we're trying to fool our brains into firing the right combinations of neurons, it makes some sense to start with something that is recognizable as real, as our lives and experiences are drawn from the real world. But different arguments could be made (e.g. that abstraction helps this process of recollection); that would be the topic of a different discussion. If your artists are more comfortable working in different frameworks there is a case to be made for alternatives, but when even Pixar agrees that physics is a good infrastructure for productive creativity, then you have quite a strong "proof" that it's indeed a good starting point.


Diminishing returns... It's nonsense. Not that it doesn't exist as a phenomenon, but we are still far from there in terms of effort versus quality, and there are many ways to mitigate it in asset production as well (money vs content, which will then hopefully relate back to money).
As I said, every day I come back home from the office, and every day (or so) I'm amazed at the world (I'm in Vancouver, it's pretty here) and how far we still have to go to simulate all this... No, the next step is not going to be VR (Oculus is amazing, truly, even if I'm still skeptical about a thing you have to wear and for which we have no good controls); there is still a lot to do on a 2D screen, both in rendering algorithms and in pure processing power.
Yes, we need more polygons, please. Yes, we need more resolution. And then more power on top of that, to be able to simulate physics and free our artists from the shackles of eyeballing parameters and hand-painting maps and so on...

And I don't even buy the fact that rendering is "ahead" and other things "lag" behind. How do you even make the comparison?
Is AI "behind" because people in games are not as smart as humans? Well, that's quite unfair to the field; making something look like a photo, versus making something behave like a human, seems a bit easier to me.
Maybe you could say that animation is behind because things look much worse in motion than they do when static. But not only is part of that a rendering problem, it also says exactly that: things in motion are "harder" than static things; it doesn't mean that "motion" lags behind as a field...
Maybe you can say we implemented more novel techniques in rendering than we did in other fields: animation didn't change that much over the years, rendering changed more. I'm not entirely sure that's true, and I'm not entirely sure it means that much anyway, but yes, maybe we had more investment, or some games did, to be more precise.

Anyhow, we still suck. We are just now beginning to understand the basics of what colors are, what materials are, how light works. Measure, capture, model... We're still so ignorant. Not to mention on the technical side. Pathetic. We don't even know what to do with most of the hardware yet (compute shaders? for what?).

There could be an argument that spending more money on rendering is not worth it, because spending it on something else now gets us more bang for the buck, which is a variation of the "rendering is ahead" reasoning that doesn't hinge on actually measuring what is ahead of what. I could consider that, but really the appeal of this version is just that it's harder to disprove. On the other hand, it's also completely arbitrary!
Did we measure this? That would actually be fascinating! Can we devise an experiment where we turn a "rendering" knob and an "animation" or "gameplay" knob and see what people are most sensitive to? I doubt it, seriously, but it would be awesome.
Maybe we could do some market research and come up with metrics that say people buy more games if they have better animation than rendering, but... I think rendering actually markets better (that's why companies name and promote their rendering engines, but not their animation ones).

Lastly, you could say it's better to spend money somewhere else just because rendering seems expensive, and maybe the same money pays for so much more innovation elsewhere. Maybe. This still needs ways of measuring things that can't be measured, but really, what some people are scared of is that asset costs will keep going up and up. Not really "rendering" costs, but "art" costs. Well, -rendering- actually is the way to -lower- art costs.
No rendering technique is good if it doesn't serve art better, and unfortunately even there we still suck... We are mostly making art the same way we always did: triangles, UVs, manually splitting objects, creating LODs, grouping objects and so on. It's really sad, and really another reason to be optimistic about how much we still have to do in the future.

Now, I don't want to sound like I'm saying: I'm a rendering guy, my field is more relevant and all the money should go to it. Not at all! I'm actually passionate about a lot of things; animation for example is fascinating as well... and who knows, maybe down the line I'll do stuff that's completely different from what I'm doing today... I'm just annoyed that people say things that are not really based on facts (and while we're at it, let's also dispel the myth that hardware progress is slowing down...).

Cheers.