Showing posts with label Rants.

26 June, 2022

Machines Arose

The era of algorithmic slavery.

When we think of the rise of the machines, we picture Skynet and The Matrix. Humanity literally fighting the AI, with big pew-pew guns, and getting enslaved by it. Heroes seeing through the deception, illuminated minds, perhaps looking insane to the average bystander, purposed with a higher calling.

We lose ourselves in the bombast of Hollywood, we take metaphors literally, we fear or dream of the singularity, look for signs of consciousness in the code we write.

A lesser known game from Bethesda... 2029 is close!

In reality, the danger is the opposite. It's not about how much consciousness the machines gain; it is, rather, about how much they remove from us. Yes, the recent "LaMDA is sentient" BS is not much more than a bad publicity stunt - but that doesn't mean that Google is not scary!

We are of course already dependent on machines - that, per se, is not the problem - the problem is the degree of our attachment to them. We are dependent on all the technology we create. It's the defining feature of humanity to better itself through technology; that has been true since we made fire.

For millennia we have used technology to elevate ourselves, to free us from the minutiae of living and sublimate our spirit, enabling higher forms of creativity, allowing us to dedicate more time to work that is intellectual in nature.

You can call this productivity, even augmented intelligence - once we discovered that technology is not just good for easing physical labor, but can be shaped into tools for better thinking.

Sougwen Chung (愫君) - Machines can be tools that augment our creativity. 

Is this trend going to end one day? Is it already ending?

Will we live in a world where it's increasingly hard to add value through the use of technology, where most of us are instead made irrelevant by it? What happens to the masses that can't produce anything of interest?

Can our creativity outpace the machine’s forever? 

https://www.youtube.com/watch?v=g9Z0pqsCUhY

One can argue that a tool remains a tool, and in the history of the world, short-sighted people always lamented when creation became more accessible, from painting to film photography, from film to digital cameras, from cameras to smartphones. 

There is always someone lamenting the loss of "true" art - and they are always wrong... but! At the same time, we have enough historical evidence of machines displacing jobs, labor having to learn new skills, often painfully, for the generations caught in the transition. 

There is some reason to worry, then - but it's not the key to the story here. Creativity is likely to remain firmly in the domain of humans; in fact, one could say that a truly creative machine would need to be a conscious one, and that is not the scenario I'm interested in.

The danger is subtler, closer and more real. 

Do we already live in a world where many creators are replaceable slaves, milked for content by algorithms that are the true holders of value?


AIs feed us for most of our days. SHODAN's tools are videos of kittens, dogs and babies. And her minions join willingly, hoping for visibility and connection. 

It's a marvelous machine that exploits the brain chemistry of consumers with cheap dopamine, and of creators too: as we push our photos and videos out for follows, we increasingly define our value in society by the number of likes we get.

How conscious are we, when most of our connections are software-mediated and sentiment-analyzed? The algorithm does not know when to stop, and neither do our brains. Dopamine is the AI's sugar.

We do not need to be intubated, in pods, to be enslaved. We don't even need to be forced into slavery: once we have created a system that gives some short-term pleasure, we willingly subjugate ourselves to it.

Don’t take your science fiction literally.

I don't fear the sentient AI and the singularity. I don't care much about privacy and crypto-anarchism. I think we are looking at the wrong problems. Even the worries about physical changes in our cognitive abilities, psychology and looks might be overstated - we are very plastic, we adapt.

And however despicable the role that simplistic recommendation algorithms, shares and likes play in creating information bubbles and driving polarization, we are beginning to understand and to rebel - the systems might be tuned differently...

The existence of such a system, though, per se, and the fact that it can be tuned - can that ever be moral? Are we not admitting that we are losing agency, if the way a machine operates controls society?

This is a Silicon Valley problem that SV cannot solve for itself. It's the natural evolution of companies to want to be successful, and we are in a world where success means engaging billions of people, capturing a large percent of their time and attention.

These systems can hardly be called tools, and are clearly not in our control.

02 February, 2022

WTF is the Metaverse?!

Disclaimer! Yes, I work at Roblox. It's been a decade or so since I could pretend this space was anonymous, and many years ago I made it clear that c0de517e/deadc0de = Angelo Pesce. And yes, my work makes me think about what this "metaverse" thing is more than the average person on the street (Roblox has been a metaverse company long, long before it was "cool"). I guess an engineer at Google might think about "the internet" more than the average person... But the following truly is not about what we are building at Roblox, which is something quite specific - these are my opinions, and other people might agree with them to some degree, or disagree.

I don't like hype cycles.

It is somewhat frustrating to see how supposedly experienced and rational people jump on the latest shiny bandwagon. At the same time, I guess it's comfortingly human. But that's a topic for another time...

Thing is, the metaverse is undoubtedly "hot" right now, so hot that every company, regardless of what they do, wants to have a claim to it. Mostly harmless, even cute, and for some, validating years of effort pushing these ideas... But, at the same time, it dilutes the concept, it makes words mean little to nothing when you can slap them onto any product.

So, let's give it a try and think about what the metaverse really is, and how, if at all, it is different from what we have today.

In the most general sense, "the metaverse" evokes ideas of synthetic, alternative places for social interactions, entertainment, perhaps even work... living our lives.

And let's set aside the possible dystopian scenarios - not the point of this piece, although they are always important to consider seriously, while also reminding ourselves that the same charges have been levied against most society-affecting technology, from the printing press onwards.

This definition is just plain... boring!

It's boring because we have always been doing that, at least since we had the ability to connect computers together. We are social animals; obviously we want to imagine any new technology in a social space. BBSes were alternative places for social interaction. And entertainment. And work. And from there on we had all kinds of shared virtual worlds, from IRC to the Mii Channel, from MUDs to World of Warcraft, from Club Penguin to Second Life, and so on. 

LucasFilm's Habitat. Now live!

The entire internet fits the bill, through that lens, and we don't need a new word for old ideas - outside marketing perhaps.

So, let's try to find some true meaning for this word. What's new now? Is it VR/AR/XR perhaps? Web 3.0 and NFTs? The "fediverse"?

Or perhaps there is nothing new really; we just ran out of ideas, having already explored the space of conventional social media startups, and are now trying to see if some old concept can be successful, throwing a few things at the wall to see what sticks...

My thesis? Agency.

Agency is the real differentiating factor. 

Really, it's right there, staring at us. Like a high school kid facing an essay, sometimes it's good to look at the word itself: what does the dictionary tell us? Yes, we're going there: "In its most basic use, meta- describes a subject in a way that transcends its original limits, considering the subject itself as an object of reflection".

If you're controlling your virtual, alternative, synthetic universe, you are creating something that might be spectacular, engaging, entertaining, powerful... but it's not a metaverse. 

Videogames are not the metaverse, not even MMORPGs... Sandboxes/UGC/modding is not the metaverse. Virtual worlds are not the metaverse! 

Yes, I'm "disqualifying" Minecraft, Second Life, Gather.Town, GTA 5, Decentraland, Skyrim, Fortnite, Eve Online, the lot - not because of the quality of these products, but because we don't need new words for existing concepts, we really don't... 

Obviously, the line is somewhat blurry, but if you're making most of the rules you are "just" creating a world, with varying degrees of freedom.

A metaverse is an alternative living space (universe... world...) that is mostly owned by the participants, not centrally directed. Users create, share creations and make all of the rules (the meta- part).

Why does this distinction matter? Why is it interesting? 

At a shallow level, obviously, it gives you more variety than a single virtual world. It has all the interesting implications of any platform where you do not control content. You are not really asking people to enter your world or use your product; you are there to provide a service for others to create what they want to create and market it, form communities, and engage with them...

But I think it's more than that. This extra agency works to create a qualitatively different community, one that is centered around the creation and sharing of creations, an economy you might call it. Something quite different from passive consumption or social co-experience.

Ironically, through this lens, most of Web 3.0 "gets it wrong", focusing on decentralizing a transaction ledger of virtual ownership, but making that ownership be simply parts of strictly controlled virtual universes. You own a certificate to a plot of digital land that someone else created and controls.

Regardless of the fact that you only own the certificate, and not the actual land, which can disappear at any moment... these kinds of worlds seem at best a coat of paint over very old and limited concepts.

To me, even outside the blockchain, the entire notion of centralized versus decentralized systems, proprietary and closed versus interoperable open standards - all these concepts are really a "how", not a "what". They might be appropriate choices for a given product at a given time, but they should never be what the product "is".

Without wanting to sell the metaverse as the future, I personally think that these "fake" or "weak" metaverses, together with the current hype, are what pushes people away from something that could be truly interesting.

Note also that nothing of this idea of social creativity, giving a platform for people to create and share in others' creations, has to do with new technologies. 

You don't need VR for any of this. You don't need hand tracking, machine learning and 3d scanning, you don't even need 3d rendering at all! 

These are all tools that might or might not be appropriate, but you could have perfectly great metaverses that are text only if you wanted to (remember MUDs? add the "meta" part...). And at the same time, just because you have some cool 3d technology, it does not mean you have something for the metaverse...

E.g. you could have a server hosting community-created ROMs for a Commodore 64, add built-in networking to allow the ROMs to be about co-experience, add a pinch of persistence to allow people to express themselves, and you'd have a perfectly great, exciting metaverse... Or you could take something like UXN and the vision of permacomputing as the foundation, to reference something more contemporary...

BBS Door Games - more proto-metaverse-y than most of today's virtual worlds.

In summary, these are to me the key attributes of this metaverse idea:

  1. Inherently Social and Interactive - as we are social animals, we want to inhabit spaces that allow socialization. This mostly means real-time networking, allowing users to connect, create and experience together.
  2. User-Created: participants have full agency over the worlds. Otherwise, you're just making a conventional virtual world. This is the "meta" part: you should not have control over the worlds; users should be able to take pieces of the universe and shape them, or completely subvert everything, and own their creations.
    • Litmus test: if your users are "playing X", then X is not a metaverse. If they are playing X in Y, then Y might be a metaverse :)
  3. Must have Shareable Persistence. Users should be able, in-universe, to store and share what they create - creating an economy, connecting worlds and people. And at the very least, the world must allow for a persistent, shared representation of self (Avatars). Otherwise, you're only making a piece of middleware, a game engine.

It's a social spin on the old OG hacker ethos of tinkering, creating with computers, owning one's creations and sharing them. It has nothing to do with the particular implementation, and it is not even about laws, copyright, or politics. It's a community that creates together, makes its own rules, and has full agency over these virtual creations. 

One more thing? In a truly creator-centric economy, you don't need to base all your revenue on ads, and the dark patterns they create.

Perhaps, to shape that future, it's more useful to revisit old, lost ideas than to chase shiny new overhyped toys. More of Smalltalk's idea of Personal Computing and Plan 9, less NFTs and XR...

01 December, 2019

Is true hacking dead? What we lost.

I don't know how consciously, but now that I've moved to San Mateo I've found myself listening to many audiobooks about the history of computing, videogames and Silicon Valley, from the Jobs biography to the "classic" Hackers by Steven Levy, from "Console Wars" to "Bad Blood".
All of these I've been enjoying, even if some need to be taken with more of a grain of salt than others, and from most I've gained one or two interesting perspectives.

Hackers, in particular, struck some chords that are dear to me. Besides the history and the various personalities, some of which I didn't know of, one thing resonated: the hands-on, pragmatic, a-political nature of early hacking.

And no, before we keep going, I don't mean that we should not be political in our actions, today. We are social animals and we should care about society and politics; in fact, it would seem to me that the only reason, at least if one is to take the book at its word, why early hacking was a-political is that hackers were fairly despicable a-social people.

But, it is interesting, because one could make the case that nowadays we live in a world where ideologies trump pragmatic realities, and perhaps we should understand why and take a step back.

What did hackers want? Access to computing. Computers were fascinating, mesmerizing and scarce. It wasn't a matter of software licenses, nobody cared about pieces of paper (or locked doors even), we wanted to be able to touch and tinker with the machine.

And everything was made to be tinker-friendly in a golden age of computer hackerism, where kids like me could put sprites on a home television set by reading the C64 manual and playing with BASIC.
Nobody cared that the machine was not open source, or that the BASIC interpreter was licensed from Microsoft.

It was truly a huge movement, if we think about it for a second; even its tools were all about immediacy, graphics as a means of direct feedback, live-coding.
We had one-megahertz CPUs (in my time) working with size-optimized (not speed-optimized!) interpreters.

Even at the ideological level, the goal was for everyone to have access, with systems like Lisp and even more clearly Smalltalk which were designed explicitly with the idea that the user was a tinkerer, always able to stop the world, inspect the inner workings, make some changes, and keep going.
We almost didn't have graphics, but it was in a way the golden age of graphics because people were mesmerized by the possibilities, especially excited about having immediate feedback loops, direct manipulation, fast iteration.


Sketchpad (Sutherland), which is mentioned in Alan Kay's
"The Early History of Smalltalk"

We lost all of this, basically all. We live in a time where it's impossible not to interface with a computer, computing is cheap and immensely powerful, yet it's nearly impossible to understand and contribute to it.

It is particularly interesting how we used to have the holy grail of live-coding on computers that shouldn't have been able to afford it, while today even the newest, fanciest languages focus primarily on being able to gobble up millions of lines of code in various modules while making iteration increasingly inefficient.

Not having direct access, the ability to stop the machine, list the code, modify and resume, was almost unthinkable. Not having an easily accessible programming language on your machine was unthinkable. 
Today, what was once a given sounds in most contexts like science fiction. QBasic is in many ways still an environment that can teach people many lessons...

And again, what I find especially remarkable is that we had so much abstraction and immediacy on machines that shouldn't have been able to afford it. The '80s were a sort of golden era for interpreters and VMs.

We went the IBM way, and we probably didn't realize it. All that we do today is built for structured teams of thousands of engineers. We prioritize big-batch development over individual productivity.
That's probably why we still have textual source (great for git and merging) over more expressive formats, or even the old idea of serializing the entire state of a VM (again Lisp, Smalltalk), which sacrifices merging entirely to make hotpatching (dynamic software updates) trivial.


The sad and inspiring story of TempleOS,
a.k.a. what the Raspberry Pi should have been.

Now, to a degree this is entirely reasonable: when something becomes commoditized it's just another thing to be used, it loses its appeal. 
We buy cars and go to mechanics, right? We don't know how to peek inside the engines anymore...

But what is striking to me is how that ideology is completely lost as well, replaced with one that prioritizes theoretical freedoms over actual ones. 

We replaced the Commodore 64, which was entirely closed and proprietary yet hackable, with a Linux-based monstrosity like the Raspberry Pi, which is mostly open source from what I understand (on the software side of things), yet might as well be booting Windows - the vast majority of its uses would remain identical.
It's a cheap and fun toy for programmers, sure, but it mostly (entirely?) fails at making computation more accessible, which was its original goal.

In general, it feels like hacking today is dogmatic instead of pragmatic. Surely if everything were open-source... or distributed... or blockchain-based, immutable and lock-free with a pinch of functional programming... written this way or that, then we would have a better, enlightened society. 

And it's not a joke, it's not entirely a fringe phenomenon: there are vast arrays of engineers who are honestly invested in trying to change the world, but who honestly think that the solutions are to be found in the technical infrastructure of things. (by the way - wanna see something weird?)

Perhaps we didn't truly graduate from our a-social tendencies, perhaps we're true to form in thinking that the machine and technology are more interesting than people, and groups, and culture...

Whatever the causes, we have software and hardware systems that strive to be entirely open, yet time and again it is the closed ones, the ones that are more accessible in practice, that really drive social revolutions.
Linux didn't change the desktop, nor the way software is made. 

Look at my industry. Videogames. What made games tinkerable, liberated individual creativity and art, even the ability to make a living?

Steam, the Apple App Store, Microsoft XBLIG, YouTube, Twitch, Spotify, Patreon... Unity, Pico-8, Dreams, passing through Minecraft and Roblox and the game modding community... Not the blockchain, not Linux or torrents and so on.

Even the Demoscene, one of the last bastions of true hackerism, is completely uninterested in the ideology of software licenses and contracts.


Joseph White - Pico-8

And ironically, probably by utter coincidence, but ironically indeed, all the new power brokers of this era, the Facebooks and Amazons, the Googles and Twitters and so on, fully embrace open-source stacks, hundreds of millions of lines of code powering the AIs, the networks of today. 
The new IBMs know very well that lines of code are for the most part worthless, but people and communities aren't, so it's a no-brainer to open-source more if in exchange one gets more people involved in a project, and more engineers hired...

In the end, probably licenses don't mean much. And perhaps technology doesn't either. How we design our human-computer (and human-to-human) interfaces does. And if we don't start thinking about people, and keep thinking that some lines of code or a contract can change the world, we'll be stuck not understanding why we keep failing.

See also: this inspiring keynote by Andy van Dam, "Reflections on Unfinished Revolutions in Personal Computing", and the work of Bret Victor.

06 July, 2016

How to spot potentially risky Kickstarters. Mighty No.9 & PGS Lab

This is really off-topic for the blog, but I've had so many discussions about different gaming related Kickstarters that I feel the need to write a small guide. Even if this is probably the wrong place with the wrong audience...

Let's be clear, this is NOT going to be about how to make a successful Kickstarter campaign; actually, I'm going to use two examples (one of a past KS, and one of a campaign that is still open as I write) that are VERY successful. It's NOT even going to be about how to spot scams, and I can't say that either example is one.

But I want to show how to evaluate risks, and when it's best to use a good dose of skepticism, because it seems that a lot of people get caught up in the "hype" for given products and end up regretting their choices.

The two examples I'm mostly going to use are the Mighty No.9 and PGS Lab campaigns.
I could have picked others, but these came to mind. It's not a specific critique of these two though, and I know there are lots of people enjoying Mighty No.9, and I wish the best to PGS Labs; I hope they'll start by addressing the points below and proving my doubts unfounded.

The Team

This is absolutely the most important aspect, and it's clear why. On Kickstarter you are asked to give money to strangers, to believe in them, their skills and their product. 
Would you, in real life, give away a substantial amount of money to people, for an investment, without knowing anything about them? I doubt it.

So when you see a project this successful...


...your first thought must be: these guys must be AMAZING, right?


I kid you not, that's the ONLY information on the PGS Lab team. They have a website, but there is ZERO information on them there as well.


From their (over-filtered and out-of-sync) promo video, we learn the name of one guy...


"We have brought together incredible Japanese engineers and wonderful industrial designers". A straight quote from the video, the only other mention of the team. No names, no past projects, no CVs. But they are "wonderful", "incredible" and "Japanese", right?

This might be the team. Might be buddies of the guy in the middle...
For me, this is already a non-starter. But it seems mine is not a popular point of view...

The team?

So what about Mighty No.9 then? Certainly, Inafune has enough of a CV... And he even had a real team, right? He even did the bare minimum and put the key people on the Kickstarter page...



Or did he? Not so quickly...


This is the first thing I noticed in the original campaign. Inafune has a development team (Comcept), but it seems that for this game he intended to outsource the work.

Unfortunately, not an unusual practice, it seems that certain big names in the industry are using their celebrity to easily raise money for projects they then outsource to third party developers.



Igarashi, for Bloodstained, did even "worse". Not only is the game itself outsourced, but so is the campaign, including the rewards and merchandise. In fact, if you look at the KS page, you'll notice some quite clashing art styles...


...I suspect this was due to the fact that different outsourcers worked on different parts of the campaign (concept art vs rewards/tiers).

Let's be clear, per se this is not a terrible thing; both Igarashi and Inafune used Inti Creates as the outsourcing partner, which has plenty of experience with 2D scrollers, and that means the end product might turn out great (in fact, the E3 demo of Bloodstained looks at least competent, if not exceptional)... But it shows, to me, a certain lack of commitment.

People are thinking that these "celebrity" designers will put their careers on the line, against the "evil" publishers that are not funding their daring titles (facepalm), while they are just running a marketing campaign.

This became extremely evident for Inafune in particular, as he rushed to launch a (luckily disastrous... apparently you can't fool people twice) second campaign in the middle of Mighty No.9's production, revealing his hand and how little commitment he had to the title.

The demo: demonstrating skills and commitment

Now, once you've got the team down, you want to evaluate their skills. Past projects surely help, but what helps even more is showing a demo, a work-in-progress version of the product.

It's hard enough to deliver a new product even when you are perfectly competent; I've worked on games made by experienced professionals that just didn't end up making it, and I've backed Kickstarters that failed to deliver even when they were just "sequels" to products a given company was already selling... So you really shouldn't settle for anything less than concrete proof.

How do our Kickstarters fare in terms of demos?


PGS Labs show a prototype. GREAT! But wait...


Oh. So, the prototype is nothing more than existing hardware, disassembled and reassembled in a marginally different shape. In fact you can see the PCBs of the controller they used, a joypad for tablets which they just opened, desoldering some buttons and moving them into a 3D-printed shell.

Well, this would be great if we were talking about modding, but it proves exactly NOTHING about their ability to actually -make- the hardware (my guess - but it's just a guess - is that in the best scenario they are raising money to look for a Chinese ODM that already has similar products in its catalog, and they won't really do any engineering).

Of course, when it comes to the marketing campaigns of "celebrity designers", all you get is whatever is cheapest to make; they know they'll get millions anyway, so they just get some outsourcers to paint some concept art.


It's really depressing to me how, by just creating a video with their faces, certain people can raise enormous amounts of money. And I know that there are lots of success stories, from acclaimed developers as well, but if you look at them, the pattern is clear: success comes from real teams of people deeply involved with the products, and with actual, proven, up-to-date skills in the craft.

Whereas, so far, the projects of older, lone "celebrities" have -all- resulted in games that are -at best- OK. Have we ever seen a masterpiece come out of any of these? Dino Dini? Lord British?

Personally, as a rule of thumb, I'd rather give money to a "real" indie developer, who in lots of cases really can't just go to a publisher or even self-fund by borrowing from a bank, and who often makes MUCH, MUCH better games through real passion, sacrifice, and eating lots of instant noodles, I assume...

The "gaming press"

What irks me a lot is that these campaigns are very successful because they feed on the laziness of news sites where hype spreads due to the underpaid human copy and paste bots who just repeat the same stuff over and over again. It's really a depressing job.

And even good websites, websites where I often go for game critique and intelligent insights, seem to be woefully unequipped to discuss anything about production, money, how the industry works. 

I'm not sure if it's because gaming journalists are less knowledgeable about production (but I really doubt it) or if it's because they prefer to keep a low profile (but... these topics do bring "clicks", right?).

Anyhow. I hope at least this can help a tiny bit :)

26 April, 2015

Sharing is caring

Knowledge > Code.

Code is cheap, code needs to be simple. Knowledge is expensive, so it makes lots of sense to share it. But, how do we share knowledge in our industry?

Nearly all you really see today is the following: a product ships, people wrap it up, and some good souls start writing presentations and notes which are then shared either internally or externally in conferences, papers and blogs.

This is convenient both because at the end of production people have more time on their hands for such activities, and because it's easier to get company approval for sharing externally after the product shipped.

What I want to show here are some alternative modalities and what they can be used for.

Showing without telling.

Nowadays an increasingly lost art, bragging has been the foundation of the demo-scene. Showing that something is possible, teasing others into a competition can be quite powerful.

The infamous fountain part from Stash/TBL

One of the worst conditions we sometimes put ourselves into is to stop imagining that things are possible. It's a curse that comes especially as a side-effect of experience: we become better at doing a certain thing consistently and predictably, but it can come at the cost of not daring to try crazy ideas.
Just knowing that someone did somehow achieve a given effect can be very powerful; it unlocks our minds from being stuck in negative thinking.

I always use as an example Crytek's SSAO in the first Crysis title, which was brought to my attention by an artist with a great "eye" while he was playing the game at the company I was working for at the time. I immediately started thinking about how realtime AO could be possible, and the same day I quickly created a shader, by modifying code from relief mapping, which came close to the actual technique (albeit, as you can imagine, much slower, as it was actually ray marching the z-buffer).
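
To make the idea concrete, here is a minimal toy sketch of that "ray march the depth buffer" intuition - illustrative Python of my own, not the shader I wrote back then and certainly not Crytek's actual technique. It marches short rays outward in screen space over a view-space depth buffer and darkens pixels that have nearby geometry in front of them:

```python
import numpy as np

def toy_depth_ao(depth, x, y, num_dirs=8, num_steps=8, radius_px=16, bias=0.05):
    """Rough ambient-occlusion estimate for one pixel of a view-space depth buffer:
    march rays outward in screen space and count how often the stored surface is
    closer to the camera than our pixel, i.e. how often it could occlude it."""
    h, w = depth.shape
    d0 = depth[y, x]
    occluded, total = 0, 0
    for k in range(num_dirs):
        ang = 2.0 * np.pi * k / num_dirs
        dx, dy = np.cos(ang), np.sin(ang)
        for s in range(1, num_steps + 1):
            sx = int(round(x + dx * radius_px * s / num_steps))
            sy = int(round(y + dy * radius_px * s / num_steps))
            if not (0 <= sx < w and 0 <= sy < h):
                break
            total += 1
            if d0 - depth[sy, sx] > bias:  # neighbor is in front of us: occluder
                occluded += 1
    return 1.0 - occluded / max(total, 1)

# A flat floor with a box sitting on it: pixels next to the box get darkened.
depth = np.full((64, 64), 10.0)
depth[24:40, 24:40] = 9.0                      # the box, one unit closer
print(round(toy_depth_ao(depth, 32, 32), 2))   # on top of the box: 1.0 (open)
print(round(toy_depth_ao(depth, 32, 42), 2))   # on the floor, near the box: < 1.0
```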

This is particularly useful if we want to incentivize others to come up with different solutions, to engage their minds. It's also easy: it can be done early, it doesn't take much work, and it doesn't come with the same IP issues as sharing your solution.

Open problems.

If you have some experience in this field, you have to assume we are still making lots of large mistakes. Year after year we learned that our colors were all wrong (2007: the importance of being linear), that our normals didn't mean what we thought (2008: care and feeding of normal vectors) and that they didn't mipmap the way we did (2010: lean mapping), that area lights are fundamental, that specular highlights have a tail and so on...
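
As a tiny illustration of the first of those mistakes - a toy example of my own, not taken from the articles referenced above - averaging colors in their sRGB-encoded form instead of in linear light gives visibly wrong results:

```python
def srgb_to_linear(c):
    # sRGB decoding (IEC 61966-2-1), c in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Average an sRGB mid-grey with black, e.g. when downsampling a texture.
a, b = 0.5, 0.0
naive = (a + b) / 2                                                   # averages the encoded values
linear = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)  # averages the actual light

print(f"naive sRGB average:   {naive:.3f}")   # 0.250 - too dark
print(f"linear-light average: {linear:.3f}")  # ~0.361 - what the physics says
```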

Actually, you probably already know of many errors you are making right now: some that are known but you were too lazy to fix, some you know you are hand-waving away without strong error bounds, and many more you don't suspect yet. The rendering equation is beautifully hard to solve.

The rendering equation
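
For reference, this is the equation in its usual hemispherical form: the radiance leaving a point is the emitted radiance plus all the incoming radiance, weighted by the BRDF and by the cosine of the angle of incidence.

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) +
    \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```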

We can't fix a problem we don't know is there, and I'm sure a lot of people have found valuable problems in their work that the rest of our community has overlooked. Yet we're lucky if we find an honest account of open problems as further research suggestions at the end of a publication.

Sharing problems is, again, a great way of creating discussion and engaging minds, and again it's easier to do than sharing full solutions. But even internally in a company it's hard to find such examples: people underestimate the importance of this information, and sometimes our egos come into play, even subconsciously - we think we have to find a solution ourselves before we can present.

Hilbert knew better. Johan Andersson did something along these lines for the realtime rendering community, but even though EA has easily the best infrastructure for and dedication to knowledge sharing I've ever seen, discussion of open problems was uncommon (at least in my experience).



Establishing a new sharing pattern is hard, requires active dedication before it becomes part of a culture, and has to be rewarded.

The journey is the destination.

It's truly silly to explore an unknown landscape and mark only a single point of interest. We would map the entire journey, noting intersections, roads we didn't take, and ones we took and had to backtrack from. Even algorithms know better.

Hoarding information is cheap and useful; keeping notes as one works is something that, in various forms, everybody does. The only thing needed is to be mindful about saving progress over time, in snapshots.

The main hurdle we face is really ego and expectations, I think. I've seen many people having problems sharing, even internally, work that is not yet "perfect" or thinking that certain information is not "worth" presenting.

Artists commonly share WIP.
Michelangelo's unfinished sculptures are fascinating.

Few people share work-in-progress on technical ideas in scientific fields; even when we do share information on the finished product, it's just not something we are used to seeing.

Internally it's easy and wise to share work-in-progress, and you really want people's ideas to come to you early in your work, not after you already wrote thousands of lines of code, just to find someone had a smarter solution or worse, already had code for the same thing, or was working at it at the same time.

Externally it's still great to tell about the project's history, what hurdles were found, what things were left unexplored. Often reading papers, with some experience, one can get the impression that certain things were needed to circumvent untold issues of what would otherwise seem to be more straightforward solutions.

Is it wise to let people wonder about these things, potentially exploring avenues that were already found not to be productive? On the other hand, sometimes documenting these avenues explicitly might give others ideas on how to overcome a given hurdle in a different way. Also consider that different people have different objectives and tradeoffs...

The value of failure.

If you do research, you fail. That is almost the definition of research work (and the reason for fast iteration): if you're not allowed to fail, you're in production, not inventing something new. The important thing is to learn, and thus, as we are learning, we can share.

Vice versa, if your ideas are always or often great and useful, then probably you're not pushing yourself hard enough (not that that is necessarily a bad thing - often we have to do the things that we know exactly how to do, but that's not research).



When doing research, do we spend most of our time implementing good solutions, or dealing with mistakes? Failing is important; it means we are taking risks, exploring off the beaten path, and it should be rewarded. But that doesn't mean there is any value in people encountering the same pitfalls over and over again.

Yet failures don't have any space in our discussions. We hide them, as if having found that a path is not viable were not valuable information to share.

Even worse really, most ideas are not really entirely "bad". They might not work right now, or in the precise context they were formulated, but often we have failures on ideas that were truly worth exploring, and didn't pan out just because of some ephemeral contingencies.

Moreover, this is again a kind of sharing that can be "easier": usually a company's legal department has to be involved when we share our findings from shipped products, but fewer people would object if we talked about things that never shipped and never even -worked-.

Lastly, even when we communicate about things that actually do work, we should always document failure cases and downsides. This is really a no-brainer - it should be a requirement in any serious publication, it's just dishonest not to do so - and nothing is worse than having to implement a technique just to find all the issues that the author did not care to document.

P.S. Eric Haines a few days ago shared his view on sharing code as part of research projects. I couldn't agree more, so I'll link it here
The only remark I'd like to add is that while I agree that code doesn't even need to be easy to build to be useful, it is something that should be given priority to if possible. 
Having code that is easy to build is better than having pretty code, or even code that builds "prettily". Be extremely pragmatic.
I don't care if I have to manually edit some directories or download specific versions of libraries into specific directories, but I do hate it if your "clean" build system wants me to install N binary dependencies just to spit out a simple Visual Studio .sln you could have provided to begin with, because it means I probably won't have the patience to look at it...

12 December, 2013

Shit people say: graphics have "peaked"

If you think that rendering has peaked, that's probably a good sign. Probably it means you're not too old and haven't lived through the history of 3D graphics, where at every step people thought it couldn't get better. Or you're too old and don't remember anymore...

Really, if I think of myself on my 486SX playing TIE Fighter back then, shit couldn't get any better. And I remember Rebel Assault, the first game I bought when I got my first CD-ROM drive. And so on and on (and no, I didn't play only Star Wars games, but at the time LucasArts was among the companies whose titles were all must-buys... until the 360 I was always a "computer" gamer; nowadays I play only on consoles).

But but but, these new consoles launched and people aren't that "wowed" right? That surely means something. We peaked, it happened.

I mean, surely it is not as if, when the 360 and later the PS3 came out, games weren't looking incredibly much better than what we had on the PS2, right? (if you don't follow the links, you won't get the sarcasm...). And certainly, certainly the PS2's launch titles (it was touted as more powerful than an SGI... remember?) blew late PS1 titles right out of the water. I mean, it wasn't just more resolution.

Maybe it's lack of imagination. As I wrote, I was the same; many times, as a player, I failed to imagine how it could get better. To a degree I think it's because video-game graphics, like all forms of art, "speak" to the people of their time, first and foremost. Even if some art might be "timeless", that doesn't imply that its meaning remains constant; it's really a cultural, aesthetic matter which evolves over time.
Now I take a different route, which I encourage you to try. Just go out, walk. See the world, the days, the nights. Maybe pick up a camera... How does it feel? To me, working to improve rendering, it's amazing. Amazing! I could spend hours walking around and looking in awe and envy at the world we can't yet quite capture in games.
Now think if you could -play- reality, tell stories in it. Wouldn't that be quite a powerful device? Wouldn't it be the foundation for a great game?

Stephen Shore, one of the masters of American color photography

Let me be explicit though: I'm not saying that realism is the only way. In the end we want to evoke emotions, and that can be done in a variety of ways, I'm well aware. Sometimes it's better to illustrate and let the brain fill in the blanks; emotions are tricky. Take that incredible masterpiece that is Kentucky Route Zero, which manages to use flat-shaded vector graphics and still feel more real than many "photo-realistic" games. 
It's truly a game that every rendering engineer (and every other person too) should play, to be reminded of the goals we are working toward: pushing the right buttons in the brain and tricking it into remembering or replaying emotions it experienced in the real world. 
Other examples you might be more accustomed to are Call of Duty (most of them) and Red Dead Redemption, two games that are (even if that's actually very questionable) not as technically accomplished as some of the competition, but manage to evoke an atmosphere that most other titles don't even come close to.

At the end of the day, photo-realism is just a "shortcut": if we have something that spits out realistic images for every angle and every lighting condition, it's easier to focus on the art, the same way it's cheaper to film a movie than to hand-paint every frame. It's a set of constraints, a way of reducing the parameter space from the extreme of painting literally every pixel of every frame to more and more procedural models, where we "automate" a lot of the visual output and allow creativity to operate on the variables left free for tuning (i.e. lighting, cinematography and so on). 
It is -a- set of constraints, not the -only- one. It's just a matter of familiarity: as we're trying to fool our brains into firing the right combinations of neurons, it makes some sense to start with something that is recognizable as real, as our lives and experiences are drawn from the real world. But different arguments could be made (i.e. that abstraction helps this process of recollection); this would be the topic of a different discussion. If your artists are more comfortable working in different frameworks there is a case to be made for alternatives, but when even Pixar agrees that physics is a good infrastructure for productive creativity, then you have a quite strong "proof" that it's indeed a good starting point.


Diminishing returns... It's nonsense. Not that it doesn't exist as a phenomenon, but we are still far from being there in terms of effort vs quality, and there are many ways to mitigate it in asset production as well (money vs content, which will then hopefully relate to money). 
As I said, every day I come back home from the office, and every day (or so) I'm amazed at the world (I'm in Vancouver, it's pretty here) and at how far we still have to go to simulate all this... No, VR is not going to be the next step (Oculus is amazing, truly, even if I'm still skeptical about a thing you have to wear and for which we have no good controls); there is still a lot to do on a 2D screen, both in rendering algorithms and in pure processing power. 
Yes, we need more polygons, please. Yes, we need more resolution. And then more power on top of that, to be able to simulate physics and free our artists from the shackles of eyeballing parameters and hand-painting maps and so on...

And I don't even buy the fact that rendering is "ahead" and other things "lag" behind. How do you even make the comparison?
AI is "behind" because people in games are not as smart as humans? Well, quite unfair to the field, I mean, trying to make something look like a photo, versus something behave like a human, seems to be a bit easier to me.
Maybe you could say that animation is behind because well, things look much worse in motion than they do when they are static. But, not only part of that is a rendering problem, but it just says exactly that, things in motion are "harder" than static things, it doesn't mean that "motion" lags behind as a field...
Maybe you can say we implemented more novel techniques in rendering than we did in other fields, animation didn't change that much over they years, rendering changed more. I'm not entirely sure it's true, and I'm not entirely sure it means that much anyways, but yes, maybe we had more investment or some games did, to be more precise.

Anyhow, we still suck. We are just now beginning to understand the basics of what colors are, of what materials are, how light works. Measure, capture, model... We're so ignorant still. Not to mention on the technical side. Pathetic. We don't even know what to do with most of the hardware yet (compute shaders? for what?).

There could be an argument that spending more money on rendering is not worth it - because spending it on something else now gets us more bang for the buck - which is a variation of the "rendering is ahead" reasoning that doesn't hinge on actually measuring what is ahead of what. I could consider that, but really the reason for it is just that it's harder to disprove. On the other hand, it's also completely arbitrary!
Did we measure this? That would actually be fascinating! Can we devise an experiment where we turn a "rendering" knob and an "animation" or "gameplay" knob and see what people are most sensitive to? I doubt it, seriously, but it would be awesome.
Maybe we could do some market research and come up with metrics that say that people buy more games if they have better animation over rendering, but... I think rendering actually markets better (that's why companies name and promote their rendering engines, but not their animation ones).

Lastly, you could say it's better to spend money somewhere else just because rendering seems expensive, and maybe the same money pays for so much more innovation somewhere else. Maybe. This still needs ways of measuring things that can't be measured, but really the thing is that some people are scared that asset costs will keep going up and up. Not really "rendering" costs, but "art" costs. Well, -rendering- actually is the way to -lower- art costs. 
No rendering technique is good if it doesn't serve art better, and unfortunately even there we still suck... We are mostly making art the same way we always did: triangles, UVs, manually splitting objects, creating LODs, grouping objects and so on. It's really sad, and really another reason to be optimistic about how much we still have to do in the future.

Now, I don't want to sound like I'm saying: I'm a rendering guy, my field is more relevant, and all the money should go to it. Not at all! Actually, I'm passionate about a lot of things - animation for example is fascinating as well... and who knows, maybe down the line I'll do stuff that's completely different from what I'm doing today... I'm just annoyed that people say things that are not really based in facts (and while we're at it, let's also dispel the myth that hardware progress is slowing down...).

Cheers.