
28 April, 2019

On the “toxicity” of videogame production.

I was at a lovely dinner yesterday with some ex-gamedev friends and, unsurprisingly, we ended up talking about the past and the future, our experiences in the trenches of videogame production. It reminded me of many discussions I had on various social media channels, and I thought it would be nice to put something in writing. I hope it might help people who want to start their career in this creative industry. And perhaps even some veterans could find something interesting in reading this.

- Disclaimer.

These are my thoughts. Duh, right? Obvious, the usual canned text about not representing the views of our corporate overlords and such? Not the point.
The thing I want to remind you of before we start is how unknowable an industry is. Or even a company, or a team. We live in bubbles; even those among us with the most experience and the most curiosity are bound by our human limits. That's why we structure large companies and teams in hierarchies, right? Because nobody can see everything. Of course, as you ascend them you get a broader view, but from those heights the details are quite blurry, and vice-versa: people at the "bottom" can be very aware of certain details but miss the whole.

This is bad enough that even if, internally, you try hard after a success or a failure to understand what went right or wrong, most of the time you won't capture these factors objectively and exhaustively. Often we don't know at all, and we fail to replicate success or to avoid failing again.

Staring at the production monster might drive you insane.

So, I can claim to be more experienced than some, less than others; it truly doesn't matter. Nobody is a source of truth in this; the best we can do is bring a piece of the puzzle. This is, by the way, a good attitude both towards oneself, knowing that we probably have myriad blind spots, and key to understanding what other people say and write. Even the best investigative journalists out there can at best report a bit of truth, an honest point of view, not the whole picture.

To name names, for example, think about Jason Schreier, whom I admire (have you read "Blood, Sweat, and Pixels"? You should...) for his writing and his ability to do great, honest research. His work is exemplary, and still, I think it's partial. In some cases, I know it is.

And that is ok; it's intellectual laziness to think we can read some account, form strong opinions, and know what we're talking about. Journalism should provide a starting point for discussion, research, and thought. It's like doing science. You chip away at the truth, but one single observation, no matter the prestige of the lab, means very little.
And if we need multiple studies to confirm things even in science, where matters are objective, measurable, and unchanging, think how hard the truth is when it comes to entities made of people…

- Hedging risk.

One thing to understand is where the risk for abuse comes from. And I write this first not because it should be a personal responsibility to avoid abuse, but because it’s something that we don’t talk about. Yes, there is bad management, terrible things do exist, in this industry as in others, and they have to be exposed, and we have to fight. But that doesn’t help us to plan our careers and to take care of ourselves. 

So, where does the potential for abuse come from? Simply, an imbalance of power. If you don't have options, you are at risk, and in practice, the worst companies tend to be the ones with all the power, simply because it's so easy to "slip" into abusing it. Sometimes without even truly realizing what the issue is.

So, you should avoid EA or Activision, Nintendo, Microsoft and Sony, right, the big ones? No, that's not the power I'm talking about, quite the opposite. Say you are an established computer engineer working for EA, on its main campus in Silicon Valley, today. Who has the power, EA or you, when Google, Facebook et al. are more than eager to offer you a job? I'd say, as an educated guess, that the most risk comes in medium-sized companies located in countries without a big game industry, in roles where the supply of labor is much bigger than the demand.

Does that mean that you should not seek a career in these roles, or seek a job in such companies? Definitely not, I started exactly like that, actually leaving a safer and even better-paid job to put myself in the above-mentioned scenario. It’s not that we shouldn’t do scary and dangerous things, but we have to be aware of what we are doing and why. My better half is an actress, she’s great and I admire her ambition, work ethic, and courage. Taking risks is fine when you understand them, you make conscious choices, you have a plan, and that plan should also include a path to stability.

- Bad management or creative management?

Fact. Most great games are done in stressful conditions. Crunch, fear, failure, generally the entire thing being on fire. In fact, the production of most great games can be virtually indistinguishable from the production of terrible games, and it’s the main reason why I advise against choosing your employer only based on your love of the end product.

This I think is remarkable. And oftentimes we are truly schizophrenic with our judgment and outrage. If a product fails, we might investigate the reasons for its failure and find some underlying problems in a company's work conditions. Great! But at the same time, when products truly succeed, we have the ability to look at the very same patterns and not just turn a blind eye to them, but actively celebrate them.
The heroic story of the team that didn’t know how to ship, but pulled all-nighters, rewrote the key system and created the thing that everyone remembers to this day. If we were to look at the top N games of all time, how many would have these stories behind their productions?

Worse, this is not just about companies and corporations. Huge entities, shareholders, due dates and market pressure. It happens pretty much universally, from individual artists creating games with the sole purpose of expressing their ideas to indie studios trying to make rent, all the way to Hollywood-sized blockbuster productions. It happened yesterday, it happens today. Will it happen in the future? Should it?

- The cost of creativity.

One other thing to realize is how this is not a problem of videogame production, at all. Videogames don’t have a problem. Creative products do. Look at movies, at actors, film crews. Visual effects. Music? Theater? Visual arts? Would you really be surprised to learn there are exactly the same patterns in all these? That videogames are not the “worst” industry among the creative ones? I’m guessing you would not be surprised…

This is the thing we should really be thinking about. Nobody knows how to make great creative products. There is no recipe for fun, there is no way to put innovation on a predictable schedule, there's no telling how many takes will be needed to nail that scene in a movie, and so on. This is truly a hard problem, fundamentally hard, and not a problem we can solve. By definition, creativity, research, innovation, all these things are unknown; if we knew how to do them up-front, they would not be novel and creative. They are defined by their lack of predictability.

In keeping with movie references...

And I don’t know if we know where we stand, truly. It’s a dilemma. On one hand, we want to care, as we should, about the wellbeing of everyone. We might even go as far as saying that if you are an artist, you shouldn’t sacrifice yourself to your art. But should you not? Should it be your choice, your life, and legacy? Probably. 
But then we might say, it's ok for the individual, but it's not ok for a corporation to exploit and use artists for profit. When we create packaged products, we put creativity in a corporate box; it's now the responsibility of the corporation to ensure the wellbeing of its employees, to rise to higher standards. And that is absolutely true; I would never question it.

Yet, our schizophrenia is still there. It’s not that simple, for example, we might like a given team that does certain products. And we might be worried when such a team is acquired by a large corporation because they might lose their edge, their way of doing things. You see the contradiction in that?

In general (in a very, very general sense), large corporations are better, because they are ruled by money: investors looking at percentages, often banks and other institutions that don't really know nor care about the products. And money is fairly risk-averse; it makes big publishers cash in on sequels, big franchises, incremental improvements and so on. All things that bring more management, that sacrifice creativity for predictability. Yet we don't really celebrate such things, do we? We celebrate the risk takers, the crazy ones…

- Not an absolution.

So, tl;dr: creativity has a cost in all fields, it's probably something we can't solve, and we should understand our own willingness to take risks, our own objectives and paths in life. Our options exist on a wide spectrum; if you can, you should probably expose yourself to lots of different things and see what works best for you. And what works best will change as your life changes as well.

But this doesn’t mean that shitty management doesn’t exist. That there aren’t better and worse ways of handling risks and creativity, that there is no science and no merit. Au contraire. And ours, being a relatively new industry in many ways, certainly the youngest among the big creative industries, still has a lot to learn, a lot to discuss. I think everyone who has a good amount of production experience has seen some amount of incompetence. And has seen or knows of truly bad situations, instances of abuse and evil, as I fear will always be the case when things involve people, in general.

It’s our responsibility to speak up, to fight, to think and debate. But it’s also our responsibility to not fall into easy narratives, oversimplifications, to think that it’s easy to separate good and bad, to identify from the outside and at a glance. Because it truly isn’t and we might end up doing more harm than help, as ignorance often does.

And yes.
These are only my 2c.

07 April, 2019

How to choose your next job (why I went to Roblox)

This is one of those (rare?) posts that I wasn't sure how to write. I'm not a fan of talking about personal things here, and even more rarely do I write about companies.

But I too often see people, especially juniors entering the industry, come in with what I think are the wrong ideas of how looking for a job works, sometimes making mistakes that lead to frustration and an inability to fit into a given environment, and that can even make them want to quit the entire industry altogether.

By far the number one mistake I see is people who just want to go work on projects that they are fans of. In my industry, that means games they like to play. They don't realize that the end product tells no story about how it was made, or about what your job will be like.

I do strongly advocate trying to follow your passions; that makes working so much better. And if you're lucky, your passion will even guide you to products you personally enjoy playing. But that should not be - I repeat, SHOULD NOT BE - your first concern.


"Airship station"
I've been extremely lucky in my career. I have worked for quite a few companies, on many games. I have almost always landed in places I love, working on projects I love. But only once have I actually worked on a franchise I play (Call of Duty, and even there, I play the single player only, so perhaps you could say I don't really play most of that either).

So, I'll do what most coaches do and elevate my small sample set, based on my personal experience, into a set of rules you might or might not want to follow. And at the end, I'll also tell a bit about why I'm now working at Roblox. Deal? Good, let's go.

- Know thyself.

The first thing is to know yourself. Hopefully, if you paid attention and are honest, over the years you form an idea of who you are and what you like to do, what motivates you.
It's actually not easy, and many people struggle with it, but that might not be the end of the world either. If you don't know, then you at least know you don't and can reflect that in your education and career choices.

In my case, I think I could describe myself as follows:
  • I'm driven by curiosity. I love knowledge, learning, thinking. This is nothing particularly peculiar, if you look at theories of human desire and curiosity, gaining knowledge is one of the main universal motivators.
  • My own intellectual strength lies mostly in logical thinking. I have always been drawn to math, formal systems. This is not to say I'm an extraordinary mathematician, but I do find it easier to work when I can have a certain degree of control and understanding.
  • I love creativity, creative expression, and art, particularly visual arts. 
  • I'm a very social and open introvert. What this means is that I like people, but I've also always been primarily focused inwards, thinking, day-dreaming. Especially as a kid, I could get completely lost in my own thoughts. Nowadays, I try to be a more balanced person, but it's a conscious effort.
Ok, so what does all this mean? How does it relate to finding a job? Well, to me, since a very young age, it meant I knew I would either be an artist or a computer scientist. And that either way it would probably involve computers.
That's why I was involved as a kid in the demo scene. After high school, I decided I wasn't talented enough to make a living as an artist, and I chose computer science. In retrospect, I had a great intuition, 
as even today I struggle in my own art to go out of certain mental models and constraints. I might have been a good technical artist, who knows, but I think I made the right call. Good job, nerdy teenage me!

- Know thy enemy.

What you like to do, what you can offer. This second "step" matures as you gain more work experience, again, if you pay some attention. If you don't know yet, it's not a problem - it just means your objectives are probably more exploratory than mine. Your understanding is something that is ever-evolving.

What does all that psychological stuff above mean when it comes to a job? Well, for me it means:
  • I'm not a ninja, a cowboy, or a rockstar. I'm pretty decent at hacking code, I hope, as you would expect from anyone with some seniority, but I'm not the guy who will cruise through foreign source, write some mysterious lines, and make things work. I need to understand what I'm doing to be the most effective, and I have to consciously balance my pragmatism with my curiosity.
  • On the other hand, I'm at my best when I'm early in a project. I gravitate towards R&D, solving problems that have unknowns. Assessing risks, doing prototypes, organizing work. Mentoring other people.
  • I don't care about technology or code per se. They are all means to an end. I care about computer graphics, and that's what I know most about, but I am curious about anything, even outside computer science. So, even in R&D, I would not work in the abstract, on the five-year-out horizon, or on entirely theoretical matters. I'd rather be close to the product and the people.
I'm a rendering engineer. At least that's what I've been doing for the past decade or so. But that's not enough. There are a million ways to be a rendering engineer. I think I'm best at working on novel problems, doing applied R&D, and doing so by caring about the entire pipeline, not only code.

There are another million ways to do this job, and they are all useful in a company. There's no better or worse. If you know what you can offer and what you like, you will be able to communicate it more clearly and find better matches. We are all interested in that, in finding the perfect fit. One engineer can do terribly at one company and thrive at another. It's a very complex handshake, but it all begins with understanding what you need.

- Profit?

Note: I don't mean that everything I wrote above is something you have to think about any time you send a resume. First of all, you should probably always talk to people, and never limit yourself. Yes, really. Send that CV. No, I don't care what you're doing, the timing, the company, just send that CV and have a talk. You never know what you might learn, don't make assumptions.

Second, it's silly to go through all this explicitly, every time you think of a job. But. If you know all this, if along the way you took some effort to be a bit aware of things, you will naturally be more clear in your interactions and probably end up finding opportunities that fit you.

"Rip ur toaster"
Ok, let's now address the last point. Why Roblox? I have to be honest, I would not have written all this if a few people hadn't asked me that question. Not many; most of my friends in the industry were actually very positive, had heard good things, and made me more confident in the choice.
But in some cases, people didn't immediately see the connection between someone who has so far been doing only AAA games, and almost only for consoles, and a company that makes a platform for games mostly aimed at kids, mostly on PC and mobile, and with graphics mostly made out of flat-shaded blocks. So I thought that going through my point of view could be something interesting to write about.

Why Roblox and not, say, Naughty Dog or Rockstar, Unity or Unreal? Assuming that I had a choice of course, in a dream world where I can pick...

Because I'm fascinated by the problem set.

Now, let's be clear. I'm writing this blind; I actually intended to write it before my first day, to be entirely blind. My goal is not to talk about the job or the company. Also, I don't want to make comparisons. I am actually a staunch proponent of the idea that computer graphics is far from being solved, both in terms of shiny pixels and the associated research, and even more so in terms of production pipelines at large.
Instead, I simply want to explain why I ended up thinking that flat shading might be very interesting. 

"Stratosphere Settlement"
The way I see it, Roblox is trying to do two very hard things at once. First, it wants to be everywhere, from low-powered mobile devices to PCs to consoles, scaling the games automatically and appropriately. Second, these games are typically made by creatives who do not necessarily have the same technical knowledge as conventional game studios. In fact, the Roblox platform is even used as a teaching tool for kids, and many creators start on the platform as kids.

This is a fascinating and scary idea. How do you do graphics with primitives that are simpler than traditional DCC tools, but at the same time render efficiently across so many runtimes? In Roblox, everything is dynamic. Everything streams, and start-up times are very important (a common concern in mobile gaming in general). There is no baking; the mantra for all rendering is that it has to be incremental, cached, and degrade gracefully.

And now, in this platform with these constraints, think of what you might want to do if you wanted to start moving towards a more conventional real-time rendering engine. What could you do to be closer to, say, Unity, but retain enough control to still be able to scale? I think one key idea is to constrain authoring in ways that allow attaching semantics to the assets. In other words, not having creators fully specify them to the same level a conventional engine does, and leveraging that to "reinterpret" them a bit so they perform well across the different devices.

I don't know, I'm not sure. But it got me thinking. And that was a good sign. Was it the right choice? Ask me in a year or so...

30 March, 2019

An unbiased look at real-time raytracing

Evaluating technology without hype or hate...

That would have been the title of my blog post if I published the version I had prepared after the DXR/RTX technology finally became public last year, at GDC 2018.
But alas I didn't. It remained in my drafts folder. Siggraph came and went. Now another GDC, and I finally decided to can that and rewrite it.

Why? Not because I thought the topic wasn't interesting. Hype is easy to give in to. Fear of missing out, excitement about new toys to play with, tech for tech's sake... Hate is equally devious. Fear of change. Comfort zones, familiarity.

These are all very interesting things to think about. And you should. But can I claim I am an expert on this? I don't know, I am not a venture capitalist, and I could say I've been right a number of times, but I doubt that reaches the threshold of statistical significance.
Moreover, being old and grumpy and betting against new technologies is often an easy win. Innovation is hard!

And really, it doesn't matter much. This technology is already in the hardware, and it will stay for the future. It is backed by large companies, and more will come on board for sure. And yes, it could go the way of geometry shaders and other things that tried to work "against" the established GPU architectures, but even for these, we did spend some time to understand how they could help us...

So, let's just assume we want to do some R&D in this RTRT thing and let's ask a different question. What should we be looking for?

The do and do not list of RTRT research.

DO NOT - Think that RTRT will make things simpler, or that (technical) simplicity is an objective. In real-time rendering, the pain comes from within. Nothing will stop people from spending a month to save 0.1ms in any renderer.
Until power is out of the equation, we will always build complex systems to achieve the ultimate quality vs performance tradeoffs. When people say that shadow maps are hard, for example, they mostly mean that fast shadow maps are hard. Nobody prevents us from rendering huge, high-precision maps with high-quality filtering, even rendering from multiple light samples and doing proper area lights. We don't do it because of performance.
And that's true for all complexity in a real-time renderer. When we add raytracing to the mix we only sign up for more pain: hybrid algorithms, code paths, caching schemes and so on. And that's ok. Programmers' pain doesn't really matter much in the logistics of the production of today's games.

How many rendering techniques can you see?
How much pain was spent to save fractions of ms on each?

DO - Think about ray/memory/shading coherency and the GPU implications of raytracing. In other words, optimization. Right now, on high-end hardware, we can probably throw in a few relatively naive raytracing effects and they will work, because these GPUs are much more powerful than the consoles that constrain the scene and rendering complexity of most AAA games. They can render these scenes at obscene framerates and resolutions. So it might not seem a huge price to pay to drop back to 1080p and 60hz in order to have nicer effects. But this doesn't mean it's an efficient use of GPU power, and that won't stand long term.
Performance/quality considerations are a great culler of rendering techniques. We need to think about efficient raytracing.

DO NOT - Focus on the "wrong" things. Specular reflections don't matter much. Perceptually, they don't! Specular highlights, in general, are a strong indicator of shape and material in objects, but we are not good at spotting errors in the lighting environment that generates them. That's why cubemaps work so well. In fact, even for shiny floors and walls (planar mirrors) with objects near or in contact with them, we are fooled most of the time by relatively simple cheats. We see errors in screen-space reflections only because they sometimes fail catastrophically, and we're talking there about techniques that take fractions of a millisecond to compute. And reflections with raytracing are both too simple and too complex. Too simple, because they are an easy case of raytracing, as the rays tend to be very coherent. And too complex, because they require evaluating surface shading, which is hard to do in most engines outside screen-space, and slow, as triggering different shaders with real-time raytracing is really not hardware friendly.

Intel's demo: raytraced Wolfenstein (http://www.wolfrt.de/). Circa 2010.

DO - Think about occlusion, on the other hand. It's much more interesting, can be more hardware friendly, is definitely more engine friendly and, most importantly, is likely to have a bigger visual impact. Correct shadows from area lights, but also correctly occluded indirect lighting, both specular and diffuse.

DO NOT - Think that denoising will save the day. In the near future, for real-time rendering, it most likely will not. In fact, denoising in general (even the simple blurring that we sometimes already employ) can lift noise from high frequencies to lower ones, which under animation makes for worse artifacts.
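A toy 1-D experiment (my own sketch of the general principle, not any production denoiser) makes the point concrete: blurring white noise does not remove the error, it correlates neighboring samples, i.e. it moves the noise energy into lower frequencies, exactly the kind of blotchy artifact that crawls visibly under animation.

```python
import random

random.seed(1)
n = 10_000
# "White" noise: every sample independent, so adjacent samples are uncorrelated.
noise = [random.uniform(-1.0, 1.0) for _ in range(n)]

def box_blur(x, r=4):
    # Simple box filter: average each sample with its 2*r neighbors.
    out = []
    for i in range(len(x)):
        window = x[max(0, i - r):i + r + 1]
        out.append(sum(window) / len(window))
    return out

def lag1_corr(x):
    # Correlation between adjacent samples: ~0 for white noise,
    # close to 1 for low-frequency ("blotchy") noise.
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

blurred = box_blur(noise)
# The raw noise is uncorrelated; the blurred noise is strongly correlated.
# Same error, just pushed down the spectrum.
```

The blur reduces per-sample variance, which looks better in a screenshot, but the surviving error is now spatially coherent, and coherent error that shifts from frame to frame is precisely what the eye latches onto.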

DO - Invest in caching and temporal accumulation ideas, beyond screen-space. These will likely be more effective, and useful for a wide variety of effects. Also, do think about finer-grained solutions to launch work, update caches, and update on demand. For this, real-time raytracing might help indirectly, because in order to be performant it needs the ability to launch shader work from other shaders. That general ability, if implemented in hardware and exposed to programmers, could be useful in general, and it's one of the most interesting things to think about when we think of hardware raytracing.
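To make the accumulation idea concrete, here is a minimal sketch of the exponential moving average at the heart of most temporal accumulation schemes (the function name and the fixed blend factor are mine, purely for illustration): each frame, blend the new noisy result into a history buffer, trading a bit of lag for a large reduction in variance.

```python
import random

def accumulate(history, sample, alpha=0.1):
    # Exponential moving average: keep (1 - alpha) of the history,
    # take alpha of the fresh, noisy per-frame sample.
    return [(1.0 - alpha) * h + alpha * s for h, s in zip(history, sample)]

random.seed(0)
history = [0.0] * 4  # e.g. four pixels of an accumulation buffer
for _ in range(200):
    # Each frame the "true" value is 1.0, plus heavy per-frame noise...
    sample = [1.0 + random.uniform(-0.5, 0.5) for _ in range(4)]
    history = accumulate(history, sample)
# ...yet after a couple hundred frames the history sits near the true value.
```

The hard part in a real renderer is not this blend; it's deciding when the history is stale (disocclusions, moving lights, camera cuts) and must be rejected or reprojected, which is exactly where the caching schemes come in.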

DO NOT - Make the wrong comparisons! RTX on / RTX off tells a lie, because what we can't see with "RTX off" is what the game could look like if we allocated all the power that RTX needs to pushing conventional techniques or even simply more assets. There are a lot of techniques we don't use today because we don't think they are on the right side of the quality/performance equation. We could use them, but we prefer to push more assets instead.
If you want to be persuasive about raytracing, proper comparisons should be made. And proper comparisons should also take into account that rasterization without shading (visibility only) leaves compute units available for other work to be done in parallel. 
RTX hardware isn't free either! It costs chip area, even if you don't use it, but there's nothing we can do about that...

DO NOT - Assume that scene complexity is fixed. This is a corollary of the previous point, but we should always ask, at the very least, whether simply pushing more stuff gives a bigger overall visual impact than a given particular idea for "shinier" stuff, because scene complexity is far from having "peaked".

Offline rendering might (might!) be essentially complexity-agnostic today.
Real-time, not quite. (frame from Avengers Infinity War)

DO - Think about cases where raytracing could outperform rasterization at its own game. This is hard, because raytracing will likely always have quite a high cost, both because of the memory traffic required to traverse the spatial subdivision structures, and because it uses the compute units, while the rasterizer is a small piece of hardware that can operate in parallel. But, that said, raytracing could win in a couple of ways.
First, because it's much more fine-grained. For example, refreshing very small areas in a shadow map could perhaps be faster with a raytracer. Another way to say this is that there are certain cases where the number of pixels we need visibility for is much smaller than the number of primitives and vertices we'd need to traverse in a rasterizer.
The second thing to think about is how raytraced visibility goes wide, using the compute units and thus the entire GPU. The rasterizer, on the other hand, can often be the bottleneck. And even if in many cases we can overlap other work to keep the GPU busy, that is not true in all cases!

DO - Think about engineering costs if you want the technology to be used now. It's true that programmers' pain doesn't matter. But at the moment RTX covers a tiny slice of the market, and that pain could go into completing more important tasks... Corollary: think about fallback techniques. If we move an effect to RTX, how will we render it on GPUs that don't support it? Will it look very different? Will it make authoring more painful? That is something we generally can't afford.
In general, be brutally honest about the costs and feasibility of solutions. This is a good rule in general, but it is especially true for an emerging technology. You don't want to burn developers with techniques that look good on paper but fail to ship.

DO - Establish collaborations. Real-time raytracing is probably not going to sell more copies of a game, and it's not going to save costs or make authoring more effective, if we're talking about uses in the runtime (an exception could be uses in the artist tools themselves, e.g. to aid lightmap baking and/or previewing). It currently targets only a small audience, and you'll gain nothing by jumping on this too early.
So, you probably should not pull your smartest R&D engineers from whatever they're doing to jump on this unless you have some very persuasive outside incentives... And if not, you likely won't have many people to do raytracing-related things.
Thus, you should probably see if you can leverage collaborations with external research groups...


24 March, 2019

GDC 2019 - Everyday (shallow) ML

Here are the slides for my talk in the GDC 2019 Machine Learning tutorial day. 
Lots of slides, many more than were shown on stage...



Plus! 

Code for my "nvgPrint", a nanoVG/OpenGL library for super simple real-time, asynchronous plotting in C++.

Grab it while quantities last!


11 March, 2019

Rendering doesn’t matter anymore?

Apologies. I wanted to resist the clickbait title, but I couldn’t find anything much better...

And no, I’m not renouncing my ways as a rendering engineer, I’m not going to work on build systems or anything like that. Nor do I believe that real-time rendering has “peaked” or that our pace and progress in image quality has seen slowdowns. There is still a ton of work to do, and the difference between good and bad graphics can be dramatic...

But what I want to talk about a bit more (I mentioned this in my previous post) is what matters, and how we decide that. ROI: perhaps an ugly term, but it gets the job done.

From product.

I’ve spent most of my now thirteen-old professional career in videogames working on production teams. A.k.a. making games. And lots of games I’ve helped making, I actually average a game per year, even when I was in production, which is quite unusual I guess.

Now, when you are in production, things are relatively simple. Ok, no, they are everything but. What I mean is that it is straightforward... Ok, maybe still not the best description.

You start with some sort of rough plan. Hopefully, the creative people have ideas, they present them to you, and you start making a sketch: what are the risks, what needs to be experimented with first, which tasks are better understood.

Unless you are bootstrapping an engine from scratch or doing major tech changes, mostly you’ll be asked for a ton of features, things people want. An unreasonable amount of them. Ludicrous.

So you go on and prioritize, estimate, shuffle things until you have some plan that makes sense. It won't, but we know that; we start working and, as things change, we re-adjust that plan, kicking features off the list and moving things up the priority order...

So you get a gigantic amount of work to do, you get on the ride and off you go, fighting fires as they happen, course-adjusting and bracing yourself for the landing. For the most part. There are some other skills involved here and there, but mostly it’s about steering this huge ship that has both a ton of momentum and the worst controls ever.

Naturally, there isn’t much time to think about philosophical questions and other bullshit like that. In fact, the truth is that plenty of times you even start losing control over the priorities.

That neat idea of reshuffling your list becomes more like a rough sort, and you don’t even necessarily have time or energy to understand why people who are asking for things need these things...


Production, on a good day.

If you go around and look at big enough productions, one pattern you will notice is that people start working without knowing the “why” of things. Which leads, needless to say, to quite sub-optimal solutions. But the production beast is an organic one, it’s unclean, it’s made of people and opinions and blood and sweat. Engineering is the art of handling all that and still shipping a great game, and it looks nothing like any idealized version of beauty some programmers might hold dear.

To technology.

Then you move to some cushy job in some central technology department, right? And now you have a problem. You have time, at least, sometimes.

You might want to work on things that help, or have a chance to help, more than a single product. If you do R&D, you will be doing things that have more risks and unknowns. In general, you aren’t so strongly tied to that list of features people are shuffling around day after day. Even when you are doing the only reasonable thing, which is to be attached to a product, you are not that close, you can’t be as you’re not part of the core team.

This is an opportunity because you can have some time and freedom, but also a huge risk because, in the end, the product is all that matters. Being singularly focused on production is not necessarily the best strategy for great products, because that monster swallows and consumes everything, focused on getting “more”, but straying away too much is the road to masturbatory efforts that can be irrelevant at best, dangerous most often.

So, you start thinking of ROI. What should I do? What’s best? You probably have things from multiple teams that could be done, and you have other things that you can persuade teams they should want...

In my case, being a rendering person, the question boils down to: what matters in rendering? How do I estimate how much a thing weighs? When you move from “vfx artists want this particle trail thing and you have to do it tomorrow” to looking at things with an iota of horizon, how do you decide?

Rendering doesn’t matter...

...like it used to. Once upon a time, rendering made games. Even more than that, it made entire genres. Doom, of course, is the obvious example, but there are many: the CD-ROM FMV game era, the platformers and shooters fuelled by hardware sprites and scrolling backgrounds, and so on.


Chances are your engine won't create the next big videogame genre.

Then that ended, we arrived at a point where we had enough computing hardware that videogame genres are not defined by technology anymore. Perhaps this will change with VR/AR but for now let’s ignore them (they’re not hard to ignore either, these days).

But we still had a period where technology could be product defining. Call of Duty running at 60fps on ps3 and 360, for example, was quite unique, and that technical characteristic was instrumental to the product. Today doing a 60fps title is the norm, to ship at 30 is almost a gutsy move...

Rendering is thus restricted to the narrower field of aesthetics. It’s just... graphics. Sad if you think of it, right?

Well of course not! We have an ace up our sleeves, see. It’s true that technology is not genre-defining anymore, but AAA productions are insanely graphics-intensive. We love our computer graphics, and the number of people dedicated to their care and feeding is enormous. Everything is good again in the universe, rendering engineering reigns supreme.

So this is the first order of attack of the ROI problem. There are lots of things that are measurable in people and hours and dollars. These, pretty much, will automatically win over anything else. Let’s put them in the bucket of “really important stuff”.

By the way, when I say “measurable”, I don’t mean you can measure them or that you will. You most definitely will not! What I mean is that you could think of them and have a strong feeling they relate to said measurable quantities...

Chasing shiny things.

So I said you can bucket things. Things that are required to ship the game first. Things that help people second. Third, you get all your shiny things, which are, incidentally, what today you could call graphics R&D. A good part of the stuff I do!

Should we stop doing that? No, of course I will never admit to that, c'mon.

But more seriously, it obviously can’t be that simple. There will never be an end to things that “help people”; even in the best possible scenario you can still make progress, nothing is ever perfect. So obviously you will reach a point where some rendering effect trumps a tiny pipeline improvement, at least that is a given!

Moreover, it is not as if computer-graphics techniques, even the purely visual ones, do not help content production. We could point at the obvious trend of physically-based rendering, and how it helped (after a lot of growing pains everyone had to go through) to curb the explosion of hacks and ad-hoc controls that we previously had to use to create assets.

But even smaller things can give artists more freedom. Even something like antialiasing, for example, might mean that geometry and other sources of discontinuity can be used more leniently, without turning the frame into an undecipherable mess.

Not only are there diminishing returns for productivity improvements, as for anything else, but the split point between features and productivity is often tricky. We definitely do not wait until everything is perfect before pushing more features out; the production monster wants to be fed.

And we shall never, ever discount the gigantic effects of familiarity, the other big scary monster. It is not worth sacrificing everything to it, but we should respect it. To use a technique well, to master it takes a long time. Changing things, even if entirely for the better, with no drawbacks whatsoever, still implies that we need to pay the (often huge) costs of loss of familiarity.

So? How do you decide? How do you measure? Then again. You do not. 

I hope he won’t mind me saying, this is one of the paths to enlightenment forced on me by Christer, my former boss. How to put this. He has his tricks, not quite koans... I learned that when he wasn’t persuaded about the opportunity of something, he would go and ask me to put things in more systematic terms, to try to narrow down that ever-elusive “ROI”.

Then one time I think we were even arguing about how he could decide whether a given initiative he was supporting would, in the end, be beneficial, or the better course compared to some alternative. And he slipped and said that we don’t necessarily have to quantify this ROI thing! Of course, we both immediately caught that; even though we were over the phone he could almost sense my smile. But being the clever man he is, he managed to still be right despite the apparent contradiction...

The lesson is that we want to keep in mind that ROI thing. Not that we need to necessarily optimize for it and spend too much time chasing it. But we definitely need to keep it in mind, be always scared of the risk of doing irrelevant, or worse, damaging things. Keep ourselves accountable.

It’s the question, not the answer.

You might be excused for thinking that I put the question mark in the title, even though it isn’t in the form of a question, because of my poor English. But no, it was a clever thing, you see: I actually went back halfway into writing this, thought about it, and finally changed the punctuation. Only after deciding I would also write this, and feel so meta-clever. And again, and ok, let’s stop this recursive loop...

And if I were really good at this, I could have jumped directly to the point and spared you all the blabbing, but I have time on my hands these days, so. You’re welcome.

In the end, it is true that certain games should even chase diminishing returns because that’s what you do when you’re up enough. And it’s totally true that you can’t really quantify ROI anyway, so often times you should just do what you want. If someone really thinks something is important, and it’s not offensively bad, there should be space for that. In other words, because we know we are bad at ROI, we should realize that to chase it we should not chase it all the time (surprisingly, this is even a concept in optimization algorithms, by the way).

But! The questions are interesting.

How important are shiny things? Is there a point where state-of-the-art techniques become so complex that they are unfriendly both to content creators and to the programmers integrating and iterating on them, so much so that they will be used sub-optimally? Would simpler solutions actually have been better instead?

Think for example of something perfectly physically accurate, that can produce perfect images, but that behaves poorly when the inputs are not exact. This is not even such a wild scenario; you can see plenty of PBR games that would most likely have been better off not copy-and-pasting the GGX formulas, because now they go nuclear with specular and aliasing...
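A minimal sketch of why that happens. The GGX (Trowbridge-Reitz) normal distribution function, the D term most PBR games use, has a peak value of 1/(π α²) at n·h = 1 (with the common α = roughness² remapping). As roughness drops, that peak explodes by orders of magnitude, so highlights shrink to sub-pixel size and shimmer under minification unless something (LEAN/Toksvig-style filtering, roughness clamping, specular AA) tames the inputs. The function name here is mine, not from any particular engine:

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """GGX / Trowbridge-Reitz normal distribution function,
    with the common alpha = roughness^2 remapping."""
    a = roughness * roughness
    a2 = a * a
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

# Peak of the specular lobe (n.h = 1) for a few roughness values.
# The peak equals 1 / (pi * alpha^2), so it grows without bound
# as roughness approaches zero: the source of "nuclear" speculars.
for r in (0.5, 0.2, 0.05):
    print(f"roughness {r:4}: peak D = {ggx_ndf(1.0, r):12.1f}")
```

Nothing here is wrong mathematically; the trouble starts when authored normals and roughness maps feed near-zero roughness into that peak with no pre-filtering.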


Bloodborne might not be the pinnacle of RTR, but it is imaginative...
Even more interesting. Is there a point where the attention to graphical perfection actually produces worse graphics? Could it be, for example, that the efforts required to create worlds that are perfect, truly great quality-wise, comes in the way of creating worlds that have the variety, the artistry, the iteration and look that in the end are most often correlated to what people think of great graphics?

Again. In the end, we should remember that we serve the product. Not photorealism per se, but the product. We do believe that photorealism is a great tool to create games, and I won’t question that. But we still have to remember that photorealism is not the goal; technology per se is useless. It’s the product that we work for.

And if I had to guess, I'd say in most products today both end-user image quality and in most cases, performance, are bottlenecked by asset production, not the lack of whatever latest cool rendering trick. In particular by:
  • The sheer ability of authoring assets. Quantity / Variety.
  • The ability of iterating on assets. Quality.
  • The complexity of technical issues linked to art assets. Which in practice yields sub-optimal decisions. Performance & Quality.
  • And the very fixed granularity of assets and their editing tools, the overall inability to perform large, sweeping art changes. The more an environment is "dressed" (authored), the more it hardens and resists change. Art direction. (And perhaps this also causes an over-reliance on some of the few tools that can do said sweeping changes, namely, post-effects.)
N.B. All these are rendering problems! Implementation, research, even hardware innovation. Despite the title, the argument here is not that rendering research in videogames is a waste of time, or beyond diminishing returns. Au contraire! It's more vital than ever, in our times of enormous asset pressure. But we have to think hard about what is useful to the end product. 
To give a crude example: a very smart system to automatically generate rendering meshes from artist data (LODs, materials, instances etc.) is probably orders of magnitude more important than, say, a post-effect...