
02 June, 2019

The Value of Pixels (presentation slides)


Presented at the Bay Area game tech meetup, hosted at the Roblox offices.
If you want to be notified of future meetups, join here.




15 May, 2019

Seeing the whole Physically-Based picture.

Subtitle: Building our rendering on solidly shaky ground.

Physically-Based Rendering has won. There is no question about it: after an initial period of reluctance, even artists have been converted, and I don't think you can find many rendering systems nowadays, either offline or real-time, that haven't embraced PBR. And PBR has even proven able to adapt to multiple art styles, outside strict adherence to photorealism.

But, really, how much physics is there in our PBR renderers? Let's have a look.

- Optics and Photometry.


Starting from the top, we have to define our physical framework. Physical theories are models, made to "fit" reality in order to make predictions. Different models are appropriate for different contexts and problems. For rendering, we work with a framework called "geometrical optics".

In G.O. light is composed of multiple frequencies which are assumed to be independent. Light travels in straight lines (in homogeneous media). It changes direction at changes of media (changes of IOR), where it can be absorbed, reflected or refracted. It travels instantaneously and it follows the path of least time (Fermat's principle). 
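To make those rules concrete, here is a minimal sketch of reflection and refraction at a single IOR boundary, in Python with NumPy (the function names and conventions are mine, purely illustrative):

import numpy as np

def reflect(d, n):
    # d: incoming direction (unit length), n: surface normal (unit length)
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, ior_in, ior_out):
    # Snell's law; returns None on total internal reflection
    eta = ior_in / ior_out
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n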

Is this a good framework? It's already making a lot of assumptions, and we know it cannot model all light behavior even when it comes to things that are easily visible: diffraction, interference, fluorescence, phosphorescence. But we say that these phenomena are not that common in everyday materials, and we might be right.

That's not all though; even before we start rendering our first triangle, we make more assumptions. First, we define a color space, usually a trichromatic one, exploiting the metamerism of our visual system. Fine, but we know that's not correct for rendering. We know spectral rendering yields different, sometimes dramatically different, results, but we trust our artists to tune lighting, materials, and post-processing in the right way (even if the two things shouldn't be related) to generate nice images even if we restrict ourselves to RGB. Or at least, we hope.
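A toy demonstration of the gap, with made-up spectra and a made-up RGB projection (none of these numbers come from real data): projecting light and reflectance to RGB and then multiplying does not give the same answer as multiplying the spectra first.

import numpy as np
rng = np.random.default_rng(0)
n_bins = 16                                 # wavelength bins
to_rgb = rng.random((3, n_bins))            # stand-in for color matching functions
light = rng.random(n_bins)                  # emitted spectrum
albedo = rng.random(n_bins)                 # surface reflectance, in [0,1]
spectral = to_rgb @ (light * albedo)        # multiply spectra, then project
rgb_albedo = (to_rgb @ albedo) / (to_rgb @ np.ones(n_bins))
rgb = (to_rgb @ light) * rgb_albedo         # project first, then multiply
print(spectral, rgb)                        # the two differ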

- Scattering


Next, we have to define what happens when the light "hits" something (an IOR discontinuity). Well, who knows, light is really hard! Some electrons... resonate? Get polarized? Please let it not be something to do with quantum stuff... Anyhow, eventually they scatter some energy back... waves? particles? There is some interference at around the atomic level. Who knows! Luckily, we have another framework that comes to the rescue: microfacet theory.

Surfaces are made of microfacets, like a microscopic landscape; light rays hit them, bounce around, and eventually come out. If we integrate the behavior of said microfacets over a small area, we can compute a scattering probability (a BRDF) from the distribution of the microfacets themselves, apply a lot of math, and voilà, rendering happens.

Over a small area? How small, by the way? Well, Naty Hoffman and Eric Heitz say around the order of magnitude of the projected area of a pixel. I say, around the order of magnitude of a light wavelength, and then the projected area thing is antialiasing applied "after". So probably it's the pixel thing that's right.

What are these microfacets made of? Ideal reflectors obeying only the Fresnel equations, which govern how much light is reflected and how much is refracted. The refracted part gets into the material (for dielectrics, which somehow allow this behavior), scatters some more, and eventually comes out. If it comes out still "near enough", we call that "diffuse" reflection.
Otherwise, we call that subsurface scattering. But how does the light scatter inside the material? It hits particles. Microflakes? But microfacet based diffuse models (e.g. Oren-Nayar) simply swap the facets from ideal reflectors to ideal diffusers (Lambert)...
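For reference, here is one common instantiation of this machinery, a sketch rather than "the" BRDF: a Cook-Torrance-style specular lobe with a GGX (Trowbridge-Reitz) distribution, Schlick's Fresnel approximation, and a separable Smith masking-shadowing term.

import numpy as np

def d_ggx(n_dot_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function
    a2 = alpha * alpha
    t = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * t * t)

def f_schlick(v_dot_h, f0):
    # Schlick's approximation to the Fresnel equations
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def g_smith(n_dot_v, n_dot_l, alpha):
    # Separable Smith masking-shadowing (one of several variants in use)
    def g1(n_dot_x):
        a2 = alpha * alpha
        return 2.0 * n_dot_x / (n_dot_x + np.sqrt(a2 + (1.0 - a2) * n_dot_x ** 2))
    return g1(n_dot_v) * g1(n_dot_l)

def specular_brdf(n, v, l, alpha, f0):
    # n: normal, v: view, l: light (all unit vectors); alpha: GGX roughness
    h = v + l
    h = h / np.linalg.norm(h)
    n_dot_v = max(np.dot(n, v), 1e-5)
    n_dot_l = max(np.dot(n, l), 1e-5)
    return (d_ggx(np.dot(n, h), alpha) * f_schlick(np.dot(v, h), f0) *
            g_smith(n_dot_v, n_dot_l, alpha)) / (4.0 * n_dot_v * n_dot_l)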

Regardless. We know all these things! We have blog posts, Siggraph talks, and books. Physics... And this is all still well within that "geometrical optics" framework. Rays of light hit things. So much so that we can create raytracers to brute-force these microscopic interactions and derive our own BRDFs!

But is it still reasonable to use geometrical optics for these interactions? They seem to be quite... small. Maybe diffraction now matters? It turns out it does, and it's a well-known thing (if you read the papers from the sixties... Beckmann-Spizzichino), but we sweep it under the rug.

And well, we can't really derive closed-form equations from the microfacets; that integral is itself hard, so the BRDFs that we use typically introduce a bunch of other assumptions. For example, they might disregard the fact that light can bounce around multiple times before "coming out".
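That last assumption is easy to see failing numerically. Below is a toy "white furnace" test (my sketch, not anyone's production code): a single-scattering GGX lobe with Fresnel forced to one should return all the energy it receives, but the estimate drifts below one as roughness grows, because the discarded multiple bounces carried that energy.

import numpy as np
rng = np.random.default_rng(0)

def d_ggx(n_dot_h, alpha):
    a2 = alpha * alpha
    t = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * t * t)

def g1_smith(n_dot_x, alpha):
    a2 = alpha * alpha
    return 2.0 * n_dot_x / (n_dot_x + np.sqrt(a2 + (1.0 - a2) * n_dot_x ** 2))

def white_furnace(alpha, n_samples=200000):
    # View along the normal; integrate brdf * cos over the hemisphere
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    cos_l = u1                              # uniform hemisphere, pdf = 1/(2*pi)
    sin_l = np.sqrt(1.0 - cos_l * cos_l)
    phi = 2.0 * np.pi * u2
    l = np.stack([sin_l * np.cos(phi), sin_l * np.sin(phi), cos_l], axis=1)
    h = l + np.array([0.0, 0.0, 1.0])
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    # F = 1; the cosine in the integrand cancels the n_dot_l in the denominator
    integrand = d_ggx(h[:, 2], alpha) * g1_smith(cos_l, alpha) * g1_smith(1.0, alpha) / 4.0
    return integrand.mean() * 2.0 * np.pi   # divide by the pdf

for alpha in (0.1, 0.5, 1.0):
    print(alpha, white_furnace(alpha))      # drops further below 1.0 as alpha grows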

But who cares? Nice pictures can be generated with this theory, and that's what matters. Moreover, someone did try to fit the resulting equations to real-world materials, right? The MERL database? I wonder how much error there is in that fit. Or how well the database samples real-world materials. Or how perceptual the metric used to estimate the error is... Better to not think too much.

- Fiat Lux!


Are we done now? Far from it! In practice, we cannot just use the BRDF and brute-force light rays, not for real-time rendering, we're not Arnold. We need to compute a few more integrals!

We need to integrate over the light source, and over the surface area that is "seen" by the pixel we're considering (the pixel footprint). And that is incredibly hard, so hard we don't even try before introducing a bunch more assumptions and approximations.

First of all, when we talk about pixel footprint, we really mean that we consider some statistics of the surface normals. We don't consider the fact that, for example, the "view rays" themselves change (and the light ones too), or that the surface normals don't really exist as an entity separate from actual surface geometry (which would cause shadowing and all other fun things). We assume these effects to be small.
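One classic example of "statistics of the surface normals" is Toksvig-style normal filtering; the sketch below (my simplification of a common trick) folds the variance of prefiltered normals into a wider roughness:

import numpy as np

def filtered_alpha(alpha, avg_normal):
    # avg_normal: the unnormalized average of the normals in the footprint;
    # the shorter it is, the more the normals disagree
    length = np.linalg.norm(avg_normal)
    variance = (1.0 - length) / length      # Toksvig's variance estimate
    # one simple way to widen the lobe; exact mappings vary by NDF
    return np.sqrt(alpha * alpha + variance)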

Then, when we talk about light, we mostly mean simple geometric shapes that emit light from their surface. For example, a sphere. At each point, the light is emitted equally in all directions, and most often, it's also emitted with the same intensity over the surface.
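These simple shapes are what buy us closed-form results. A sketch: for a uniformly emitting sphere fully above the horizon, with the surface normal pointed at its center, the Lambertian irradiance has a tidy closed form that we can verify with brute force (toy numbers, my conventions):

import numpy as np
rng = np.random.default_rng(0)
radiance, radius, dist = 1.0, 0.5, 2.0
sin2 = (radius / dist) ** 2                 # squared sine of the angular radius
closed_form = np.pi * radiance * sin2       # E = pi * L * sin^2(angular radius)
# Monte Carlo over the hemisphere to check (uniform sampling, pdf = 1/(2*pi))
n = 500000
cos_t = rng.random(n)                       # cosine of the angle from the normal
inside = cos_t * cos_t > 1.0 - sin2         # does the direction hit the sphere?
estimate = 2.0 * np.pi * np.mean(radiance * cos_t * inside)
print(closed_form, estimate)                # both around 0.196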

And even then, it's not enough to compute everything in closed form. In fact, the more complex the light, typically, the more approximated the BRDF becomes. And then we fit these approximated BRDFs to the "real" one, and sum everything up. And we sprinkle some of that pixel-footprint filtering on top somehow as well, but really, that's derived once against the "real" BRDF, even if we never actually use it!

So we have an approximation for very small lights, maybe a good one for spheres, one for lines and capsules with some more handwaving, more still for polygonal lights (especially if textured), and lastly one for far-away, "environment" lighting... We have approximations for "diffuse" and for "specular", for each of these. And maybe separate ones for static versus dynamic objects? A lot of math and a lot of different approximations.

We compare them and make sure that more-or-less the material looks the same under different kinds of light, and call it a day... The most ambitious of us might even export a scene to a path-tracer and compute some sort of ground-truth, to visually make sure things are at least reasonable...

- We're done, right?


So... we get our final image. In all its Physically Based, 60fps, HDR glory! Spectacular.

Year after year people come up with better equations, tighter approximations, and we make shinier pixels as a result.

Is that all? Of course not! We are just getting started! 

In practice, materials are not just one surface... They can have layers! And they are never optically uniform! They sparkle! They are anisotropic, they have scratches. Really, look around, look at most things. Most things are sparkly and anisotropic, due to the way they are fabricated.

And nothing is a surface, really. It's mostly volumes and particles. Even... air! So we need fog and volumetric models. But that's not just about the light that scatters in the air back towards our virtual cameras; we should also consider how this scattering affects lighting at surfaces. Our rays of light are not that straight anymore! Participating media make our light sources more "diffuse". Bigger. All of them, even things like environment lighting! And... that should affect shadows too, right?
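Even the simplest volumetric ingredient, pure absorption along a straight ray, already touches every light path. A sketch of Beer-Lambert transmittance (the coefficient below is made up):

import numpy as np

def transmittance(sigma_t, distance):
    # Beer-Lambert: fraction of light surviving 'distance' through a
    # homogeneous medium with extinction coefficient sigma_t
    return np.exp(-sigma_t * distance)

print(transmittance(0.2, np.array([1.0, 5.0, 10.0])))  # exponential falloff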

And now that we think about shadows... all this complexity, all these unknowns, are still only about what we call "direct" lighting! What about global illumination? What about the million other hacks and assumptions that we rely upon to render each of our frames?

- Conclusions

So. How much physics is there in a frame, really? And more importantly, what's the point of all this? Should we be ashamed of not knowing physics that well? Should we do physics more? Less?



I don't know. I personally do not know physics well and I'm not too ashamed. A lot of what we've been doing is reasonable, really. We went with GGX because its "tail" helps. All the lighting improvements served our products. All the assumptions, individually, looked reasonable.

But there is value, I think, in looking at our math and our approximations holistically, now that we are getting so good at photorealism.

Perhaps, for example, there is not much value in going off the deep end of BRDF complexity if we can't then integrate these BRDFs with complex lighting, or if, in order to do so, we have to approximate them all over again.

Similarly, the features we focus on should be evaluated. Is it more important to have non-uniform emission in our light sources, or a different "tail" in GGX/T-R? Anisotropic surfaces or sparkles? Spectral sampling? Thin-film? Non-Lambertian diffuse? Of which kind? Accurate energy conservation, or multiple bounces between microfacets?

Is it better to use the best possible approximation for a given integral, even if we end up with many different ones, or should we just use a bunch of spherical Gaussians, or LTCs and such, and keep the same representation everywhere? And in general, is most of our error still in the materials, or in the lights? This is very hard to tell just by looking at artist-made pictures, because artists will compensate any way they can!

But even more importantly - How much can we keep relying on simplifying assumptions in order to make our math work?

I suspect that to answer these questions we'll need more data. Acquire it from the real world. Brute-force solutions. Then look at the data and understand what matters, what matters perceptually, what errors we are committing end-to-end, and what we should approximate better, and how...

And we should not assume that because somewhere we have a bit of physics, we are doing things correctly. We are, after all, a field that forgot for decades basic things like color spaces and gamma.

NumPy by Example

This originally was part of my Scientific Python 101 article. I've split it out, as that was a long article, and sometimes I just need to have a look at this code as a reminder of how things work.
If you're interested in a similar "by example" introduction for Mathematica, check this other one out.

Execute each section in an IPython cell (copy and paste, then shift-enter)

# Set-up the environment:
# First, we'll have to import NumPy and SciPy packages.
# IPython has "magics" (macros) to help common tasks like this:
%pylab
# This will avoid scientific notation when printing, see also %precision
np.set_printoptions(suppress = True) 
# Optional, Python 2 only: change the way division works:
#from __future__ import division 
# 1/2 = 0.5 instead of integer division; note that 2/2 becomes a float
# as well if we enable this (this is already the default behavior in Python 3)

# In IPython notebook, cell execution is shift-enter
# In IPython-QT enter evaluates and control-enter is used for multiline input
# Tab is used for completion, shift-tab after a function name shows its help
# Note that IPython will display the output of the LAST -expression- in a cell
# but variable assignments in Python are NOT expressions, so they will
# suppress output, unlike Matlab or Mathematica. You will need to add print
# statements after the assignments in the following examples to see the output
data = [[1,2],[3,4],[5,6]]
# if you want to suppress output in the rare cases it's generated, use ;
data; # try without the ; to see the difference

# There is a magic for help as well
%quickref
# Also, you can use ? and ?? for details on a symbol
%pylab?

# In addition to the Python built-in functions help() and dir(symbol)
# Other important magics are: %reset, %reset_selective, %whos
# %prun (profile), %pdb (debug), %run, %edit

1*3 # evaluate this in a separate cell...

_*3 # ...now _ refers to the output of last evaluated cell
# again, not that useful because you can't refer to the previous expression
# evaluated inside a cell, just to the previous evaluation of an entire cell

# A numpy array can be created from a homogeneous list-like object
arr = np.array(data)
# Or an uninitialized one can be created by specifying its shape
arr = np.ndarray((3,3))
# There are also many other "constructors"
arr = np.identity(5)
arr = np.zeros((4,5))
arr = np.ones((4,))
arr = np.linspace(2,3) # see also arange and logspace
arr = np.random.random((4,4))

# Arrays can also be created from functions
arr = np.fromfunction((lambda x: x*x), shape = (10,))
# ...or by parsing a string
arr = np.fromstring('1, 2', dtype=int, sep=',')

# Arrays are assigned by reference
arr2 = arr
# To create a copy, use copy
arr2 = arr.copy()

# An array's shape is just a descriptor; it can be changed without copying
arr = np.zeros((4,4))
arr = arr.reshape((2,8)) # returns the same data in a new view
# numpy also supports matrices, which are arrays constrained to be 2d
mat = np.asmatrix(arr)
# Not all operations avoid copies: flatten always creates a copy
arr.flatten()
# While ravel copies only when it has to
arr = np.zeros((4,4))
arr.ravel()

# By default numpy arrays are created in C order (row-major), but
# you can arrange them in Fortran order (column-major) on creation,
# or make a Fortran-order copy of an existing array
arr = np.asfortranarray(arr)
# Freshly created arrays are contiguous in memory; views (e.g. slices)
# can be strided instead

# Arrays can be indexed as with python lists/tuples
arr = np.zeros((4,4))
arr[0][0]
# Or with a multidimensional index
arr[0,0]
# Negative indices start from the end
arr[-1,-1] # same as arr[3,3] for this array

# Or, like matlab, via slicing with a range: start:end
arr = arange(10)
arr[1:3] # elements 1,2
arr[:3] # 0,1,2
arr[5:] # 5,6,7,8,9
arr[5:-1] # 5,6,7,8
arr[0:4:2] # step 2: elements 0,2
arr = arr.reshape(2,5)
arr[0,:] # first row
arr[:,0] # first column

# Also, indexing works with a list of indices (see also choose)
arr = arr.reshape(10)
arr[[1,3,5]]

# and with a numpy array of bools (see also where)
arr=np.array([1,2,3])
arr2=np.array([0,3,2])
arr[arr > arr2]

# flat returns a 1D iterator
arr = arr.reshape(2,5)
arr.flat[3] # same as arr.reshape(arr.size)[3]

# Core operations on arrays are "ufunc"tions, element-wise
# vectorized operations
arr = arange(0,5)
arr2 = arange(5,10)
arr + arr2

# Operations on arrays of different shapes follow "broadcasting rules"
arr = np.array([[0,1],[2,3]]) # shape (2,2)
arr2 = np.array([1,1]) # shape (2,)
# When we do arr+arr2, shapes are compared from the trailing dimension
# backwards: each pair of dimensions must either match in size or one of
# them must be 1 (missing leading dimensions count as 1), in which case
# the size-1 dimension is stretched to match
arr + arr2 # arr2 added to each row of arr!

# Broadcasting also works for assignment:
arr[...] = arr2 # [[1,1],[1,1]] note the [...] to access all contents
arr2 = arr # without [...] we just say arr2 refers to the same object as arr
arr[1,:] = 0 # This now is [[1,1],[0,0]]
# flat can be used with broadcasting too
arr.flat = 3 # [[3,3],[3,3]]
arr.flat[[1,3]] = 2 # [[3,2],[3,2]]

# broadcast "previews" broadcasting results
np.broadcast(np.array([1,2]), np.array([[1,2],[3,4]])).shape

# It's possible to manually add ones in the shape using newaxis
# See also: expand_dims
print(arr[np.newaxis,:,:].shape) # (1,2,2)
print(arr[:,np.newaxis,:].shape) # (2,1,2)

# There are many ways to generate list of indices as well
arr = arange(5) # 0,1,2,3,4
arr[np.nonzero(arr)] += 2 # 0,3,4,5,6
arr = np.identity(3)
arr[np.diag_indices(3)] = 0 # indices of the main diagonal elements
arr[np.tril_indices(3)] = 1 # lower triangle elements
arr[np.unravel_index(5,(3,3))] = 2 # unravel_index turns a flat index into a coordinate tuple, here (1,2)

# Iteration over arrays can be done with for loops and indices
# Iterating over single elements is of course slower than native
# numpy operators. Prefer vector operations with slicing and masking.
# Cython, Numba, Weave or Numexpr can be used when performance matters.
arr = np.arange(10)
for idx in range(arr.size):
    print(idx)
# For multidimensional arrays there are indices and iterators
arr = np.identity(3)
for idx in np.ndindex(arr.shape):
    print(arr[idx])
for idx, val in np.ndenumerate(arr):
    print(idx, val) # assigning to val won't change arr
for val in arr.flat:
    print(val) # assigning to val won't change arr
for val in np.nditer(arr):
    print(val) # same as before
for val in np.nditer(arr, op_flags=['readwrite']):
    val[...] += 1 # this changes arr

# Vector and Matrix multiplication are done with dot
arr = np.array([1,2,3])
arr2 = np.identity(3)
np.dot(arr2, arr)
# Or by using the matrix object, note that 1d vectors
# are interpreted as rows when "seen" by a matrix object
np.asmatrix(arr2) * np.asmatrix(arr).transpose()

# Comparisons are also element-wise and generate masks
arr2 = np.array([2,0,0])
print(arr2 > arr)
# Branching can be done by checking predicates with any (at least one
# true) or all (all true)
if (arr2 > arr).any():
    print("at least one greater")

# Mapping a function over an array should NOT be done w/comprehensions
arr = np.arange(5)
[x*2 for x in arr] # this will return a list, not an array
# Instead use apply_along_axis (here axis 0, along rows)
np.apply_along_axis((lambda x: x*2), 0, arr)
# apply_along_axis is equivalent to a python loop: for simple expressions
# like the above, it's much slower than broadcasting (just arr*2)
# It's also possible to vectorize python functions, but they won't execute faster
def test(x):
    return x*2
testV = np.vectorize(test)
testV(arr)

# Scipy adds a wealth of numerical analysis functions, it's simple
# so I won't write about it.
# Matplotlib (replicates Matlab plotting) is worth having a quick look.
# IPython supports matplotlib integration and can display plots inline
%matplotlib inline
# Unfortunately, if you choose the inline backend, you lose the ability
# to interact with the plots (zooming, panning...)
# Use a separate backend like %matplotlib qt if interaction is needed

# Simple plots are simple
test = np.linspace(0, 2*pi)
plt.plot(test, np.sin(test)) # %pylab imports matplotib too
plt.show() # IPython will show automatically a plot in a cell, show() starts a new plot
plt.plot(test, np.cos(test)) # a separate plot
plt.plot(test, test) # this will be part of the cos plot
#plt.show()

# Multiple plots can also be done directly with a single plot statement
test = np.arange(0., 5., 0.1)
# Note that ** is exponentiation.
# Styles in strings: red squares and blue triangles
plt.plot(test, test**2, 'rs', test, test**3, 'b^')
plt.show()
# It's also possible to do multiple plots in a grid with subplot
plt.subplot(211)
plt.plot(test, np.sin(test))
plt.subplot(212)
plt.plot(test, np.cos(test), 'r--')
#plt.show()

# Matplotlib plots use a hierarchy of objects you can edit to
# craft the final image. There are also global objects that are
# used if you don't specify any. This is the same as before
# using separate objects instead of the global ones
fig = plt.figure()
axes1 = fig.add_subplot(211)
axes1.plot(test, np.sin(test))
axes2 = fig.add_subplot(212)
axes2.plot(test, np.cos(test), 'r--')
#fig.show()

# All components that do rendering are called artists
# Figure and Axes are container artists
# Line2D, Rectangle, Text, AxesImage and so on are primitive artists
# Top-level commands like plot generate primitives to create a graphic

# Matplotlib is extensible via toolkits. Toolkits are very important.
# For example mplot3d is a toolkit that enables 3d drawing:
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
points = np.random.random((100,3))
# note that scatter wants separate x,y,z arrays
ax.scatter(points[:,0], points[:,1], points[:,2])
#fig.show()

# Matplotlib is quite extensive (and ugly) and can do much more.
# It can do animations, but it's very painful: you have to mutate the
# data of artists after having generated them. Not advised.
# What I end up doing instead is generating a sequence of PNGs from
# matplotlib and playing them back in sequence. In general MPL is very slow.

# If you want to just plot functions, instead of discrete data,
# sympy plots are a usable alternative to manual sampling:
from sympy import symbols as sym
from sympy import plotting as splt
from sympy import sin
x = sym('x')
splt.plot(sin(x), (x, 0, 2*pi))

28 April, 2019

On the “toxicity” of videogame production.

I was at a lovely dinner yesterday with some ex-gamedev friends and, unsurprisingly, we ended up talking about the past and the future, our experiences in the trenches of videogame production. It reminded me of many discussions I had on various social media channels, and I thought it would be nice to put something in writing. I hope it might help people who want to start their career in this creative industry. And perhaps even some veterans could find something interesting in reading this.

- Disclaimer.

These are my thoughts. Duh, right? Obvious, the usual canned text about not representing the views of our corporate overlords and such? Not the point.
The thing I want to remind you of before we start is how unknowable an industry is. Or even a company, or a team. We live in bubbles; even those among us with the most experience and the most curiosity are bound by our human limits. That’s why we structure large companies and teams in hierarchies, right? Because nobody can see everything. Of course, as you ascend them you get a broader view, but from those heights the details are quite blurry; and vice-versa, people at the “bottom” can be very aware of certain details but miss the whole.

This is bad enough that even if, internally, you try hard after a success or a failure to understand what went right or wrong, most of the time you won’t capture these factors objectively and exhaustively. Oftentimes we don’t know at all, and we fail to replicate successes or to avoid failing again.

Staring at the production monster might drive you insane.

So, I can claim to be more experienced than some and less than others; it truly doesn’t matter. Nobody is a source of truth in this; the best we can do is each bring a piece of the puzzle. This is, by the way, a good attitude both towards oneself, knowing that we probably have myriads of blind spots, and key to understanding what other people say and write. Even the best investigative journalists out there can at best report a bit of truth, an honest point of view, not the whole picture.

To name names, for example, think about Jason Schreier, whom I admire for his writing and his ability to do great, honest research (have you read “Blood, Sweat, and Pixels”? You should...). His work is exemplary, and still, I think it’s partial. In some cases, I know it is.

And that is ok; it’s intellectual laziness to think we can read some account, form strong opinions, and know what we’re talking about. Journalism should provide a starting point for discussion, research, and thought. It’s like doing science: you chip away at the truth, but one single observation, no matter the prestige of the lab, means very little.
And if we need multiple studies to confirm things even in science, where the subject matter is objective, measurable and unchanging, think how much harder the truth is when it comes to entities made of people…

- Hedging risk.

One thing to understand is where the risk for abuse comes from. And I write this first not because it should be a personal responsibility to avoid abuse, but because it’s something that we don’t talk about. Yes, there is bad management, terrible things do exist, in this industry as in others, and they have to be exposed, and we have to fight. But that doesn’t help us to plan our careers and to take care of ourselves. 

So, where does the potential for abuse come from? Simply, an imbalance of power. If you don’t have options, you are at risk, and in practice, the worst companies tend to be the ones with all the power, simply because it’s so easy to “slip” into abusing it. Sometimes without even truly realizing what the issue is.

So, you should avoid EA or Activision, Nintendo, Microsoft and Sony, right, the big ones? No, that’s not the power I’m talking about; quite the opposite. Say you are an established computer engineer working for EA, in its main campus in Silicon Valley, today. Who has the power, EA or you, when Google, Facebook et al. are more than eager to offer you a job? I’d say, as an educated guess, that the most risk comes in medium-sized companies located in countries without a big game industry, in roles where the supply of candidates is much bigger than the demand.

Does that mean that you should not seek a career in these roles, or a job in such companies? Definitely not: I started exactly like that, actually leaving a safer and even better-paid job to put myself in the above-mentioned scenario. It’s not that we shouldn’t do scary and dangerous things, but we have to be aware of what we are doing and why. My better half is an actress; she’s great, and I admire her ambition, work ethic, and courage. Taking risks is fine when you understand them, you make conscious choices, you have a plan, and that plan also includes a path to stability.

- Bad management or creative management?

Fact: most great games are made in stressful conditions. Crunch, fear, failure, generally the entire thing being on fire. The production of most great games can be virtually indistinguishable from the production of terrible games, and that’s the main reason why I advise against choosing your employer only based on your love of the end product.

This I think is remarkable. And oftentimes we are truly schizophrenic with our judgment and outrage. If a product fails, we might investigate the reasons for its failure and find some underlying problems in a company’s work conditions. Great! But at the same time, when products truly succeed, we are able to look at the very same patterns and not just turn a blind eye to them, but actively celebrate them.
The heroic story of the team that didn’t know how to ship, but pulled all-nighters, rewrote the key system and created the thing that everyone remembers to this day. If we were to look at the top N games of all time, how many would have these stories behind their productions?

Worse, this is not just about companies and corporations. Huge entities, shareholders, due dates and market pressure. It happens pretty much universally, from individual artists creating games with the sole purpose of expressing their ideas to indie studios trying to make rent, all the way to Hollywood-sized blockbuster productions. It happened yesterday, it happens today. Will it happen in the future? Should it?

- The cost of creativity.

One other thing to realize is how this is not a problem of videogame production, at all. Videogames don’t have a problem. Creative products do. Look at movies, at actors, film crews. Visual effects. Music? Theater? Visual arts? Would you really be surprised to learn there are exactly the same patterns in all these? That videogames are not the “worst” industry among the creative ones? I’m guessing you would not be surprised…

This is the thing we should really be thinking about. Nobody knows how to make great creative products. There is no recipe for fun, there is no way to put innovation on a predictable schedule, there’s no telling how many takes will be needed to nail that scene in a movie, and so on. This is truly a hard problem, fundamentally hard, and not a problem we can solve. By definition, creativity, research, innovation, all these things are unknowns; if we knew how to do them up-front, they would not be novel and creative. They are defined by their lack of predictability.

In keeping with movie references...

And I don’t know if we know where we stand, truly. It’s a dilemma. On one hand, we want to care, as we should, about the wellbeing of everyone. We might even go as far as saying that if you are an artist, you shouldn’t sacrifice yourself to your art. But should you not? Should it be your choice, your life, and legacy? Probably. 
But then we might say: it’s ok for the individual, but it’s not ok for a corporation to exploit and use artists for profit. When we create packaged products, we put creativity in a corporate box; it’s now the responsibility of the corporation to ensure the wellbeing of its employees, and it should rise to higher standards. And that is absolutely true; I would never question it.

Yet, our schizophrenia is still there. It’s not that simple. For example, we might like a given team that makes certain products, and we might be worried when such a team is acquired by a large corporation, because they might lose their edge, their way of doing things. Do you see the contradiction in that?

In general (in a very, very general sense), large corporations are better, because they are ruled by money: investors looking at percentages, often banks and other institutions that don’t really know nor care about the products. And money is fairly risk-averse; it makes big publishers cash in on sequels, big franchises, incremental improvements and so on. All things that bring more management, that sacrifice creativity for predictability. Yet we don’t really celebrate such things, do we? We celebrate the risk-takers, the crazy ones…

- Not an absolution.

So, tl;dr: creativity has a cost in all fields; it’s probably something we can’t solve, and we should understand our own willingness to take risks, our own objectives and paths in life. Our options exist on a wide spectrum; if you can, you should probably expose yourself to lots of different things and see what works best for you. And what works best will change as your life changes as well.

But this doesn’t mean that shitty management doesn’t exist. That there aren’t better and worse ways of handling risks and creativity, that there is no science and no merit. Au contraire. And ours, being a relatively new industry in many ways, certainly the youngest among the big creative industries, still has a lot to learn, a lot to discuss. I think everyone who has a good amount of production experience has seen some amount of incompetence. And has seen or knows of truly bad situations, instances of abuse and evil, as I fear will always be the case when things involve people, in general.

It’s our responsibility to speak up, to fight, to think and debate. But it’s also our responsibility not to fall into easy narratives and oversimplifications, not to think that it’s easy to separate good from bad, to identify it from the outside and at a glance. Because it truly isn’t, and we might end up doing more harm than good, as ignorance often does.

And yes.
These are only my 2c.