
10 April, 2022

DOS Nostalgia: On using a modern DOS workstation.

Premise. 

This blog post is useless. And rambling. As useless as the machine I'm typing it on, a Pentium 3 subnotebook from the nineties. You have been warned!

But it might be entertaining, and I suspect many of the people doing what I do and reading what I write are in a similar demographic, starting to feel nostalgic, thinking back to their formative years and wondering if they're worth revisiting...

Objectives. 

I wanted to find a DOS machine, not for retrogaming (only), but to do actual "work". Even more narrowly, I had an idea of trying to compile an old DOS demo I made in the nineties, the only production of a short-lived Italian group called "day zero deflection" (you won't find it).

Monotasking. No internet. These things are so appealing to me right now. One tries to escape the dopamine rush of doomscrolling on all the connected devices that surround us. The flesh is weak, and instead of trying to muster the required willpower, shopping for a hardware solution seems so much more attractive. Of course, it's a fool's errand, but hey, I said this post was going to be useless.

A long intermezzo of personal history.

(skip this!) 

It's interesting how memory works. So non-linear, and unreliable. I have used a lot of computers in my life, and I started early: I began programming at around six or seven years old.

This past Christmas, as the pandemic eased up, I was able to fly again and spend time with my family in southern Italy. I found one of the Commodore 64s we had.

The c64 in question. Yes, it needed some love - although, to my surprise, all my disks worked, with my childhood code! The video glitch is actually quite a mysterious defect, but that's a story for another time...

"We", because I grew up with my older cousins. My mother is the youngest of eleven siblings, so I have a lot of cousins, many of them close to my house: my family used to be farmers, and thus had land that eventually became buildings, with many of my aunts and uncles ending up living in the same complex.

These older cousins taught me programming, and I was using their computers before having my own. In fact, the c64 I found is most likely theirs, as mine was eventually donated to some relative that needed it more.

I remember a lot of this in detail, although I don't know anymore which details are real and which ended up as images remixed from different eras.

We were in the basement of my aunt's villa, just next door to the building I grew up in, where we had an apartment on the top floor. We would transfer things between the two by lowering a rope from the balcony down to the villa's garden. Later, when we had PCs and network cards, we moved bits between the buildings over a coax cable we had suspended, running from the second floor of my building (where another cousin lived) to my floor, and on to the villa.

The basement was originally the studio of my uncle, who was the town's priest. I was named after him. He and one of his sisters died in a car accident when I was little, so I am not sure I really remember him, sadly.

But I remember the basement, the Commodore 64, and later an 8086 with an external hard drive the same size and shape as the main unit. An amber monochrome monitor, I think, or perhaps it was both amber and green, with a configuration switch.

I remember all of the c64 games we played, easily. I remember bits of my coding journey, the books we used to study, and my cousin once being dismayed that I could not figure out how to make a cursor move on the screen (the math to go to the next/previous row), even if it was mostly a misunderstanding.

I remember playing with my Amiga 600 there too, Body Blows - I switched to the Amiga after visiting... another cousin, this time, in Milan.

I remember the first Pentium they had, because it allowed me to use more 3d graphics software. 3D Studio 4 without having to resort to software 387 emulation! At the time I had an IBM PS/2 with a 486sx, which the seller persuaded my father would be better than the 486dx another guy was offering us - who needs a math coprocessor, and IBM is a much better brand than something home-made... And I know that numerous times I lost all the data on these computers that I did not own, often by typing "format" too fast and putting in the wrong drive letter.

And then, nothing? Everything more modern than that I have sort of lost, or rather, it becomes more confused. I know the places I went shopping for (pirated) software and hardware, maybe some of the faces, not sure.

I know I used to lug my PC tower for the few kilometers that separated my house in Scafati from the "shop" (really a private apartment) that I used to go to in Pompei, as I was a kid and, of course, did not have a car.

And that tells me that I had lots of different PC configurations over the years, LOTS of them: AMD, Intel, Voodoo cards, a Matrox of some sort, even a Sound Blaster AWE32 at one point, a CD-ROM drive and the early CD games. I remember the excitement for each new accessory and card, and the intense hate for cable and thermal management, especially on more modern setups.

I remember scanners (the first were hand-held: a Logitech ScanMan, then a Trust), printers, joysticks, graphics tablets when I got into photography, the very first digital camera I had (I think an Olympus). It's all "PC" for me, I have no idea what I was using in which year.

At a point, around university, I switched to primarily using laptops. Acer or Asus, something cheap and powerful but they would break often (cheap plastics). Then finally the MacBook Pro, and that one has remained a constant, still today my primary personal machine.

So. My nostalgia is about three machines, really, even if I had dozens. The Commodore 64, the one I remember the most. I am eager to play around with that one more - I ordered all sorts of HW - but I have no intention of using it "daily": that one belongs in a museum.

The MiSTer FPGA c64 core is great and can output 50Hz!

The Amiga, which for some reason I don't care for as much anymore - I suspect mostly because I used it primarily for games, so I did not create as much on it. I think that was the key.

I had some graphics programs, but I was not a great 2d artist (DeluxePaint) and I did not understand enough of the 3d tools I happened to get my hands on (Real3D, VistaPro)... and I did no coding on it. At one point, I had a (pirated) copy of AMOS, but no manual.

Swapping disks, real or virtual, is also not fun.

And then the PC, specifically the 486sx that I used both for programming again (QBasic, PowerBasic, Assembly then C with DJGPP), for graphics (Imagine, then Lightwave among others), photography, the internet...

That 486 captures all of my PC memories, even if I know it's wrong. For example, during my C demo-coding times, I must have had a different computer, because the demo we were making would never run on a 486: it was SVGA, and I even remember coding our SVGA layer, fixing a bug in the Matrox VESA BIOS - it was out of spec, not setting the viewport to match the screen resolution when changing the latter, and many demos ran with the wrong line pitch because of that. Not mine! And the demo was, for some reason, writing buffers in separate R, G, B planes, with some MMX code I made to then shuffle them back into the display frame.

So, it could not have been the 486 - but this is great, it gives me the freedom of not trying to recreate a particular setup but instead going for that same feeling and toolset I remember using, on an entirely different system. 

What do we "need"? 

Here's the plan. First and foremost, we'll get a laptop, because I don't have space in my apartment - no, in my life - for a retrocomputing desktop or tower. Also, I want to go to hipster coffee shops and write on my hipster retro workstation, as I am doing right now.

I planned, regardless of the machine I would end up getting, to rip out the cells from the battery pack and reconstruct it - batteries are mostly a liability in old computers and I prefer the weight savings of not having them - this also means, technically, "luggable" computers could be considered.

We will look for:

  • Something fast, because if I'm buying something it must be the best I can get! I don't even care about being period-accurate, this will be a monotasking monster, not a museum piece.
  • Something I can program on, because hey, what if I like it and want to make modern retro-demos? Ideally, this means a Pentium I, Pentium Pro, or Pentium MMX, beautiful in-order CPUs with predictable pipelines I still know how to cycle-count (sort-of). But anything less than the dreadful Pentium 4 will do, P2 and P3s are OOO but still understandable enough.
  • RAM is not an issue really, and we will max out whatever configuration we settle on.
  • Storage is not a problem either, because we will swap whatever HDD the machine comes with for an SSD (yes, an actual SSD, albeit most people use CompactFlash adapters instead) via an mSATA to PATA/IDE 2.5" enclosure, which can fit any half-size mSATA SSD (I got a 64GB one just to be "safe", as you never know the limits of old motherboards and firmware). You do want to make sure that the machine originally supported HDDs of a decent size (tens of GB), though.
  • A DOS-compatible (SoundBlaster-compatible) sound card is a must.
  • A TFT screen is also a must. The resolution doesn't really matter, but we want something as modern as possible because old LCDs were really terrible. Ideally, 640x480 would get us the best DOS compatibility, but in practice, it's not a problem.
  • Ideally an SVGA card with good VESA/VBE compatibility, and with good scaling from the VGA resolutions (720x400 text, 320x200 and 640x480 graphics) to whatever the LCD resolution is (that means either integer scaling and the right LCD resolution, or good quality filters when upsampling).
  • A USB port is highly recommended, as we want to be able to plug in a USB storage device to easily transfer files to and from modern, internet-connected machines. Setting up networking, using PCMCIA cards, etc. would be much more painful.
  • We want a good keyboard. And, because we can, we want something cool looking, maybe an iconic piece of design, not some random garbage brand. Also, something that is easy to service.
  • Reasonably priced. There is no way I'm burning $1,000 on this just because certain hardware is "hot" right now; I find it borderline immoral.

Expectations vs Reality.

After long, long deliberations, research on forums, scouting eBay and so on, I landed on an IBM ThinkPad 240x. The ThinkPads are amazing machines, easy to service, iconic, with great keyboards and the TrackPoint is useable in a pinch.

Beautiful! Pro-tip: a bit of 303 protectant makes the plastics look like new!

I paid around $200 for it. You will see people getting these for $5 at a garage sale or the like, but I'm ok paying more for something the seller has verified is running, has no issues, and so on. More than that I think is crazy, but you do you...

When it arrived it looked amazing. Yes, it had scratches on the top, and even some hairline cracks, one near a hinge and one on the bottom of the chassis, but these are not a problem as I planned to disassemble the thing anyway, see if I needed to clean the internals, replace batteries, check for any leak, re-apply thermal paste if needed and so on.

Regardless of how much research you have done, the reality of the actual machine will surprise you in good and bad ways.

All the hardware setup was trivial, and all the things I thought would be hard were not. 

I gutted the battery as planned (the cells were already bulging a bit). The thing I feared most was the initial OS setup, but my strategy worked flawlessly. I bought an IDE-to-USB adapter, connected the SSD in its IDE enclosure, and mapped it as a virtual drive in a VirtualBox VM running Windows 98.
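For reference, the usual way to hand a physical drive to VirtualBox is a raw-disk VMDK - roughly the commands below (the drive number and paths are placeholders, not my exact setup; triple-check which PhysicalDrive is the USB-attached SSD before doing anything to it):

    REM List physical drives to find the USB-attached SSD (the number below is just an example!)
    wmic diskdrive list brief

    REM Create a VMDK that points at the raw physical drive, then attach it to the Win98 VM's IDE controller
    VBoxManage internalcommands createrawvmdk -filename "C:\VMs\retro-ssd.vmdk" -rawdisk \\.\PhysicalDrive2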

That allowed me to use Win98's fdisk and format to create something I knew would be recognized by the ThinkPad - I was not at all sure the same would have happened with modern tools. For extra safety, I also made two partitions under 2GB, to be able to format them as FAT16, and the remainder of the space was left in a third partition using FAT32.

Installing the OS was a breeze, and Lenovo still hosts all the latest IBM drivers - Windows 98 just works.

The first tiny hurdle I had to overcome was the firmware update: IBM's tools are adamant about having a charged battery to perform the update... which I clearly did not have. But in reality, the tool just calls a second executable, and even if the bundled binaries have different extensions than what the flashing tool wants by default, it did not take too long to figure out the right switches to use.

Upgrading the OS was also trivial: some people have made install packs with all the official patches and lots of unofficial fixes (I used the mdgx ones; htasoft's is an alternative), and the one I grabbed mostly just worked. The only issue I had was that the first time around the OS stopped booting with some DMA error, but disabling a specific patch having to do with enabling DMA on drives solved it. Re-installing the OS via the SSD is relatively fast, and I also used an old copy of Norton Ghost to create snapshots.

To my surprise, even USB in DOS mostly worked (via Bret Johnson's drivers, though many options exist). It is not 100% reliable, nor is it fast... but it does work! Same for the TrackPoint, via CuteMouse.

I ended up with the classic config.sys/autoexec.bat multiple-choice menu for things like emm386 and so on. I remember these being so painful to deal with, but in this case it was all easy, probably also because this machine has so much RAM.
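For the curious, this is roughly the shape of the thing - a hypothetical, minimal example with placeholder driver names and paths, not my exact setup:

    REM --- CONFIG.SYS: a boot menu with two configurations ---
    [MENU]
    MENUITEM=CLEAN, Clean DOS (no EMS, maximum compatibility)
    MENUITEM=EMS, DOS with EMM386 and CD-ROM
    MENUDEFAULT=EMS, 5

    [COMMON]
    DEVICE=C:\WINDOWS\HIMEM.SYS
    DOS=HIGH,UMB
    FILES=40

    [CLEAN]

    [EMS]
    DEVICE=C:\WINDOWS\EMM386.EXE RAM
    DEVICEHIGH=C:\DRIVERS\OAKCDROM.SYS /D:MSCD001

    REM In AUTOEXEC.BAT, a GOTO %CONFIG% then jumps to a :CLEAN or :EMS label,
    REM where the matching TSRs (mouse driver, USB drivers, MSCDEX...) get loaded.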

That is not to say there aren't problems. There are, but in a way, luckily for me, they seem to be unfixable, so I don't need to spend a ludicrous amount of time trying to overcome them (alright alright, I already did spend more time than it's worth, using DOSBox-debug and a few different decompilers to reverse an audio TSR... but I won't anymore I swear). And I did not foresee them.

First, there is the VGA. I obsessed over resolutions, because I knew that most laptops of this era do not do resolution scaling well. I had an epiphany, though, that allowed me to stop worrying about it. It's true that, ideally, 640x480 makes you not have to worry about scaling. But! Laptops with 640x480 screens tend to have incredibly crappy, small LCDs, so much so that the unscaled 640x480 area on a more modern laptop (say, an 800x600 panel) ends up covering more screen real estate and looking better!

So, problem solved, right? Yes. If you get a card with good firmware! Unfortunately, the laptop I got has an obscure chipset that not only has crappy VESA/VBE support but is also not software-patchable via UniVBE.

Some TSRs help a bit (vbeplus, fastvid), adding more modes by using other resolutions and forcing the viewport to clip, and you can play around with caching modes, but most DOS SVGA demos do not work.

TBH, that was just plain unlucky, most laptops would not be this bad at SVGA... but I guess you should expect to find at least one bit of "unlucky" hardware you did not think about in your machine.

The other issue is with DOS audio and this is a biggie. 

Yes, I paid attention, and I got a chipset that does support DOS SoundBlaster emulation. But OMG, nobody told me it was going to be this crappy! It's basically useless, with most software just not working at all, especially when it comes to digital audio. The OPL3 FM music fares better, it tends to work, though it might not sound great.

It's sad, but most DOS software, especially demos, has a much higher chance of running in Windows 98 than in pure DOS, as when Windows is loaded the audio emulation is much, much better.

This is something that apparently one simply has to live with. No PCI sound card has great DOS support, I now know, especially in laptops, as DOS audio support for PCI relies on a combination of the right sound card, the right motherboard and the right firmware.

It doesn't help that often, when people online report audio working in DOS, they mean DOS-under-Windows, not pure DOS... And if you get a laptop from the pre-PCI era, then you're likely on a 486 or less, which not only will be worse in all other areas - but also, many of these laptops did not bundle any audio hardware at all, so they are strictly worse.

That's not to say that there are no Pentium laptops with built-in ISA audio - there are, and I was probably again unlucky with the 240x being a rare combination of a DOS-compatible-ish PCI sound chip on a "bad" motherboard (apparently using the Intel 440MX chipset, which does not support DDMA). But again... expect some issues, there are no perfect laptops, and even back in the day, there was hardly a configuration that would run everything flawlessly...

Conclusions. 

Was it worth it? Should you do it? Yes and no...

It's small!

For retro gaming, or in general, passive consumption (demos, etc), it's overall a terrible idea, I'm pretty confident all laptops would be terrible, and even most desktops.

The early PC landscape was just a mess of incompatible devices, buggy, unpatched software, and crashes. You were lucky when things worked, and this is true today as well. DosBox is a million times more compatible than any real hardware. Yes, it has bugs, and lots of things can be more accurate, but on average it is better than real hardware.

There are many DosBox builds out there, and I'm sure this is going to be quickly outdated, but at the time of writing I recommend:

  • On Windows, primarily DosBox-X
    • I also keep vanilla DosBox around for debugger-enabled builds (you can even get a DosBox plugin for IDA Pro, but that's for another time), and DosBox-ECE
  • On Mac, Boxer (the Madds branch) and vanilla DosBox
    • Last time I tried, DosBox-X had issues on Mac with the mouse emulation - might have been fixed by now.

On Windows, and especially if you care about Windows of any kind, there is 86Box (a fork of PCem), which is a lower-level, more accurate emulator. DosBox does not work great even with Win 3.11, due to some odd mouse emulation problems that seem to be different in each fork.

If, like me, you want to experience a monotasking machine that you can grab for a few hours at a time to play with a simpler, more focused experience, then I'd say these laptops are great fun!

I'm even collecting a bit of a digital retro-library by mirroring old websites, often grabbed from the Wayback machine, and grabbing old magazines from the Internet Archive, to recreate the kind of reading materials I had back then...

Overall, setting this up took me less time and energy than tinkering with a Raspberry Pi or say, trying to install a fully functional Linux on a random contemporary laptop. It's one of the least annoying projects I have embarked upon.

My conscience feels ok too. It won't become garbage, I hate clutter, I hate having too much stuff, too many things I don't need in my life, especially digital crap that creates more problems than it really solves... With this one, I know I can sell or donate the hardware the moment I don't want to use it anymore, it's not going to be in a landfill, it's not another stupid gadget with a short lifespan.

The best part: all the software is portable. DOS doesn't really care about the hardware, you only need to replace a few lines in your config.sys if you have specific drivers... so I can migrate everything I have on this laptop to a DosBox setup (even today I keep the two in sync) or to a different machine.

Not bad. You want to try? Luckily it's easy, this is what I learned! You don't have to stress over the hardware (as I did), because none is perfect.

I went for something relatively "modern", a laptop that in its prime would have run Windows 98/NT/2000 - and "downgraded" it to do mostly DOS. I think that's a good choice, but I don't think this ended up working much better or worse than any other option I was considering.

02 February, 2022

WTF is the Metaverse?!

Disclaimer! Yes, I work at Roblox. It's been a decade or so since I could pretend this space was anonymous, and many years ago I made it clear that c0de517e/deadc0de = Angelo Pesce. And yes, my work makes me think about what this "metaverse" thing is more than the average person on the street (Roblox has been a metaverse company long, long before it was "cool"). I guess like an engineer at Google might think about "the internet" more than the average person... But the following truly is not about what we are building at Roblox, which is something quite specific - these are my opinions, and other people might agree with them to some degree, or disagree.

I don't like hype cycles.

It is somewhat frustrating to see how supposedly experienced and rational people jump on the latest shiny bandwagon. At the same time, I guess it's comfortingly human. But that's a topic for another time...

Thing is, the metaverse is undoubtedly "hot" right now, so hot that every company, regardless of what they do, wants to have a claim to it. Mostly harmless, even cute, and for some, validating years of effort pushing these ideas... But, at the same time, it dilutes the concept, it makes words mean little to nothing when you can slap them onto any product.

So, let's give it a try and think about what the metaverse really is, and how, if at all, it is different from what we have today.

In the most general sense, "the metaverse" evokes ideas of synthetic, alternative places for social interactions, entertainment, perhaps even work... living our lives.

And let's set aside the possible dystopian scenarios - not the point of this, though these are always important to seriously consider, while also reminding ourselves that they are levied against most society-affecting technology, from the printing press onwards.

This definition is just plain... boring!

It's boring because we have always been doing that, at least, since we had the ability to connect computers together. We are social animals, obviously, we want to imagine any new technology in a social space. BBS are alternative places for social interaction. And entertainment. And work. And from there on we had all kinds of shared virtual worlds, from IRC to the Mii Channel, from MUDs to World of Warcraft, from Club Penguin to Second Life, and so on. 

LucasFilm's Habitat. Now live!

The entire internet fits the bill, through that lens, and we don't need a new word for old ideas - outside marketing perhaps.

So, let's try to find some true meaning for this word. What's new now? Is it VR/AR/XR perhaps? Web 3.0 and NFTs? The "fediverse"?

Or perhaps there is nothing new really, but we just ran out of ideas, having already explored the space of conventional social media startups, and are now trying to see if some old concept can be successful - throwing a few things at the wall to see what sticks...

My thesis? Agency.

Agency is the real differentiating factor. 

Really, it's right there, staring at us. Like a high school kid facing an essay, sometimes it's good to look at the word itself, what does the dictionary tell us? Yes, we're going there: "In its most basic use, meta- describes a subject in a way that transcends its original limits, considering the subject itself as an object of reflection".

If you're controlling your virtual, alternative, synthetic universe, you are creating something that might be spectacular, engaging, entertaining, powerful... but it's not a metaverse. 

Videogames are not the metaverse, not even MMORPGs... Sandboxes/UGC/modding is not the metaverse. Virtual worlds are not the metaverse! 

Yes, I'm "disqualifying" Minecraft, Second Life, Gather.Town, GTA 5, Decentraland, Skyrim, Fortnite, Eve Online, the lot - not because of the quality of these products, but because we don't need new words for existing concepts, we really don't... 

Obviously, the line is somewhat blurry, but if you're making most of the rules you are "just" creating a world, with varying degrees of freedom.

A metaverse is an alternative living space (universe... world...) that is mostly owned by the participants, not centrally directed. Users create, share creations and make all of the rules (the meta- part).

Why does this distinction matter? Why is it interesting? 

At a shallow level, obviously, it gives you more variety than a single virtual world. It has all the interesting implications of any platform where you do not control content. You are not really asking people to enter your world or use your product; you are there to provide a service for others to create what they want to create and market it, form communities, and engage with them...

But I think it's more than that. This extra agency works to create a qualitatively different community, one that is centered around the creation and sharing of creations, an economy you might call it. Something quite different from passive consumption or social co-experience.

Ironically, through this lens, most of Web 3.0 "gets it wrong", focusing on decentralizing a transaction ledger of virtual ownership, but making that ownership be simply parts of strictly controlled virtual universes. You own a certificate to a plot of digital land that someone else created and controls.

Regardless of the fact that you only own the certificate, and not the actual land, which can disappear at any moment... these kinds of worlds seem at best a coat of paint over very old and limited concepts.

To me, even outside the blockchain, the entire notion of centralized versus decentralized systems, proprietary, closed versus interoperable open standards, all these concepts are really a "how", not a "what", they might be appropriate choices for a given product at a given time, but they should never be what the product "is".

Without wanting to sell the metaverse as the future, I personally think that these "fake" or "weak" metaverses, together with the current hype, are what pushes people away from something that could be truly interesting.

Note also that nothing of this idea of social creativity, giving a platform for people to create and share in others' creations, has to do with new technologies. 

You don't need VR for any of this. You don't need hand tracking, machine learning and 3d scanning, you don't even need 3d rendering at all! 

These are all tools that might or might not be appropriate, but you could have perfectly great metaverses that are text only if you wanted to (remember MUDs? add the "meta" part...). And at the same time, just because you have some cool 3d technology, it does not mean you have something for the metaverse...

E.g. you could have a server hosting community-created ROMs for a Commodore 64, add built-in networking to allow the ROMs to be about co-experience, add a pinch of persistence to allow people to express themselves, and you'd have a perfectly great, exciting metaverse... Or you could take something like UXN and the vision of permacomputing as the foundation, to reference something more contemporary...

BBS Door Games - more proto-metaverse-y than most of today's virtual worlds.

In summary, these are to me the key attributes of this metaverse idea:

  1. Inherently Social and interactive - as we are social animals and we want to inhabit spaces that allow socialization. This mostly means real-time networking, allowing users to connect, create and experience together.
  2. User-Created: participants have full agency over the worlds. Otherwise, you're just making a conventional virtual world. This is the "meta" part: you should not have control over the worlds; users should be able to take pieces of the universe and shape them, or completely subvert everything, and own their creations.
    • Litmus test: if your users are "playing X", then X is not a metaverse. If they are playing X in Y, then Y might be a metaverse :)
  3. Must have Shareable Persistence. Users should be able, in-universe, to store and share what they create - creating an economy, connecting worlds and people. And at the very least, the world must allow for a persistent, shared representation of self (Avatars). Otherwise, you're only making a piece of middleware, a game engine.

It's a social spin over the old, OG hacker's ethos of tinkering, creating with computers, owning their creations and sharing them. It has nothing to do with the particular implementation and it is not even about laws, copyright, or politics. It's a community that creates together, makes its own rules, and has full agency over these virtual creations. 

One more thing? In a truly creator-centric economy, you don't need to base all your revenue on ads, and the dark patterns they create.

Perhaps, to shape that future, it's more useful to revisit old, lost ideas than to think about shiny new overhyped toys. More Smalltalk's idea of personal computing and Plan 9, less NFTs and XR...

27 December, 2020

Why Raytracing won't simplify AAA real-time rendering.

"The big trick we are getting now is the final unification of lighting and shadowing across all surfaces in a game - games had to do these hacks and tricks for years now where we do different things for characters and different things for environments and different things for lights that move versus static lights, and now we are able to do all of that the same way for everything..."

Who said this?

Jensen Huang, presenting NVidia's RTX? 

Not quite... John Carmack. In 2001, at Tokyo's MacWorld, showing Doom 3 for the first time. It was, though, on NVidia hardware, just a bit less powerful than today's 20xx/30xx series. A GeForce 3.

You can watch the recording on YouTube for a bit of nostalgia.

And of course, the unifying technology at that time was stencil shadows - yes, we were at a time before shadowmaps were viable.

Now. I am not a fan of making long-term predictions, in fact, I believe there is a given time horizon after which things are mostly dominated by chaos, and it's just silly to talk about what's going to happen then.

But if we wanted to make predictions, a good starting point is to look at the history, as history tends to repeat. What happened last time that we had significant innovation in rendering hardware? 

Did compute shaders lead to simpler rendering engines, or more complex? What happened when we introduced programmable fragment shaders? Simpler, or more complex? What about hardware vertex shaders - a.k.a. hardware transform and lighting...

And so on and so forth, we can go all the way back to the first popular accelerated video card for the consumer market, the 3dfx.

Memories... A 3dfx Voodoo. PCem has some emulation for these, if one wants to play...

Surely it must have made things simpler, not having to program software rasterizers specifically for each game, for each kind of object, for each CPU even! No more assembly. No more self-modifying code, s-buffers, software clipping, BSPs... 

No more crazy tricks to get textures on screen, we suddenly got it all done for us, for free! Z-buffer, anisotropic filtering, perspective correction... Crazy stuff we never could even dream of is now in hardware. 
Imagine that - overnight you could have taken the bulk of your 3d engine and deleted it. Did it make engines simpler, or more complex? 
Our shaders today, powered by incredible hardware, are much more code, and much more complexity, than the software rasterizers of decades ago!

Are there reasons to believe this time it will be any different?

Spoiler alert: no. 

At least not in AAA real-time rendering. Complexity has nothing to do with technologies. 
Technologies can enable new products,  true, but even the existence of new products is always about people first and foremost.

The truth is that our real-time rendering engines could have been dirt-simple ten years ago, there's nothing inherently complex in what we got right now.

Getting from zero to a reasonable, real-time PBR renderer is not hard. The equations are there, just render one light at a time, brute force shadowmaps, loop over all objects and shadows and you can get there. Use MSAA for antialiasing...
Of course, you would need to trade off some performance for such relatively "brute-force" approaches, and some quality... But it's doable, and it will look reasonably good.

Even better? Just download Unreal, and hire -zero- rendering engineers. Would you not be able to ship any game your mind can imagine?

The only reason we do not... is in people and products. It's organizational, structural, not technical.

We like our graphics to be cutting edge, as graphics and performance still sell games, sell consoles, and get talked about.
And it's relatively inexpensive, in the grand scheme of things - rendering engineers are a small fraction of the engineering effort which in turn is not the most expensive part of making AAA games...

So pretty... Look at that sky. Worth its complexity, right?

In AAA it is perfectly ok to have someone work for, say, a month, producing new, complicated code paths to save, say, one millisecond of frame time. It is often perfectly ok to spend a month to save a tenth of a millisecond!
As long as this equation holds, we will always sacrifice engineering simplicity, and thus accept bigger and bigger engines and more complex rendering techniques, in order to have larger, more beautiful worlds, rendered faster!

It has nothing to do with hardware, nor with the inherent complexity of photorealistic graphics.
 
We write code because we're not in the business of making disruptive new games, AAA is not where risks are taken, it's where blockbuster productions are made. 

It's the nature of what we do, we don't run scrappy experimental teams, but machines with dozens of engineers and hundreds of artists. We're not trying to make the next Fortnite - that would require entirely different attitudes and methodologies.

And so, engineers gonna engineer, if you have a dozen rendering people on a game, its rendering will never be trivial - and once that's a thing that people do in the industry, it's hard not to do it, you have to keep competing on every dimension if you want to be at the top of the game.

The cyclic nature of innovation.


Another point of view, useful to make some prediction, comes from the classic works of Clayton Christensen on innovation. These are also mandatory reads if you want to understand the natural flow of innovation, from disruptive inventions to established markets.
 
One of the phenomena that Christensen observes is that technologies evolve in cycles of commoditization, bringing costs down and scaling, and de-commoditization, leveraging integrated, proprietary stacks to deliver innovation.

In AAA games, rendering has not been commoditized, and the trend does not seem to be going towards commoditization yet.
Innovation is still the driving force behind real-time graphics, not scale of production. Even if we have been saying for years, perhaps decades, that we were at the tipping point, in practice we never seem to reach it.

We are not even, at least in the big titles, close to the point where production efficiency for artists and assets is really the focus.
It's crazy to say, but still today our rendering teams typically dwarf the efforts put into tooling and asset production efficiency. 

We live in a world where it's imperative for most AAA titles to produce content at a steady pace. Yet we don't see this percolating into the technology stack: look at the actual engines (if you have experience with them), look at the talks and presentations at conferences. We are still focusing on features, quality and performance more than anything else.

We do not like to accept tradeoffs on our stacks, we run on tightly integrated technologies because we like the idea of customizing them to the game specifics - i.e. we have not embraced open standards that would allow for components in our production stacks to be shared and exchanged.

Alita - rendered with Weta's proprietary (and RenderMan-compatible) Manuka

I do not think this trend will change, at the top end, for the next decade or so at least - the only time horizon I would even care to make predictions about.
I think we will see a focus on the efficiency of artist tooling - this shift in attention is already underway - but engines themselves will only keep growing in complexity, and the same goes for rendering overall.

We have seen, just recently, in the movie industry (which is another decent way of "predicting" the future of real-time), that production pipelines are becoming somewhat standardized around common interchange formats.
For the top studios, rendering itself is not, with most big ones running on their own proprietary path-tracing solutions...

So, is it all pain? And will it always be?

No, not at all! 

We live in a fantastic world full of opportunities for everyone. There is definitely a lot of real-time rendering that has been completely commoditized and abstracted.
People can create incredible graphics without knowing anything at all about how things work underneath, and this is definitely something incredibly new and exciting.

Once upon a time, you had to be John friggin' Carmack (and we went full circle...) to make a 3d engine, create Doom, and be legendary because of it. Your hardcore ability to push pixels enabled entire game genres that were impossible to create without the very best technical skills.


Today? I believe an FPS template ships for free with Unity, you can download Unreal with its source code for free, you have Godot... All products that invest in art efficiency and ease of use first and foremost.

Everyone can create any game genre with little complexity, without caring about technology - the complicated stuff is only there for cutting-edge "blockbuster" titles where bespoke engines matter, and only for somewhat better features (e.g. fidelity, performance, etc.), not to fundamentally enable the game to exist...

And that's already professional stuff - we can do much better!

Three.js is the most popular 3d engine on github - you don't need to know anything about 3d graphics to start creating. We have Roblox, Dreams, Minecraft and Fortnite Creative. We have Notch, for real-time motion graphics...
Computer graphics has never been simpler, and at the same time, at the top end, never been more complex.

Roblox creations are completely tech-agnostic.

Conclusions

AAA will stay AAA - and for the foreseeable future it will keep being wonderfully complicated.
Slowly we will invest more in productivity for artists and asset production - as it really matters for games - but it's not a fast process.

It's probably easier for AAA to become relatively irrelevant (compared to the overall market size - that expands faster in other directions than in the established AAA one) - than for it to radically embrace change.

Other products and other markets are where real-time rendering is commoditized and radically different. It -is- already: all these products already exist, and we already have huge market segments that do not need to bother at all with technical details. And the quality and scope of these games grows year after year.

This market was facilitated by the fact that we have 3d hardware acceleration pretty much in any device now - but at the same time new hardware is not going to change any of that.

Raytracing will only -add- complexity at the top end. It might make certain problems simpler, perhaps (note - right now people seem to underestimate how hard it is to make good RT shadows or, even worse, RT reflections, which are truly hard...), but it will also make the overall effort to produce a AAA frame bigger, not smaller - like all technologies before it.
We'll see incredible hybrid techniques, and if we have today dozens of ways of doing shadows and combining signals to solve the rendering equation in real-time, we'll only grow these more complex - and wonderful, in the future.

Raytracing will eventually percolate to non-AAA too, as all technologies do.

But that won't change complexity or open new products there either because people who are making real-time graphics with higher-level tools already don't have to care about the technology that drives them - technology there will always evolve under the hood, never to be seen by the users...

17 December, 2020

Hallucinations re: the rendering of Cyberpunk 2077

Introduction

Two curses befall rendering engineers. First, we lose the ability to look at reality without being constantly reminded of how fascinatingly hard it is to solve light transport and model materials.

Second, when you start playing any game, you cannot refrain from trying to reverse its rendering technology (which is particularly infuriating for multiplayer titles - stop shooting at me, I'm just here to look how rocks cast shadows!).

So when I bought Cyberpunk 2077 I had to look at how it renders a frame. It's very simple to take RenderDoc captures of it, so I had really no excuse.

The following are speculations on its rendering techniques, observations made while skimming captures, and playing a few hours.

It's by no means a serious attempt at reverse engineering. For that, I lack both the time and the talent. I also rationalize doing a bad job at this by the following excuse: it's actually better this way. 

I think it's better to dream about how rendering (or anything really) could be, just with some degree of inspiration from external sources (in this case, RenderDoc captures), rather than exactly knowing what is going on.

If we know, we know, there's no mystery anymore. It's what we do not know that makes us think, and sometimes we guess exactly what's going on, but other times we do one better, we hallucinate something new... Isn't that wonderful?

The following is mostly a read-through of a single capture. I did open a second one to try to fill some blanks, but so far, that's all.

This is the frame we are going to look at.

I made the captures at high settings, without RTX or DLSS as RenderDoc does not allow these (yet?). I disabled motion blur and other uninteresting post-fx and made sure I was moving in all captures, to be able to tell a bit better when passes access previous frames' data.

I am also not relying on insider information for this. Makes everything easier and more fun.

The basics

At a glance, it doesn't take long to describe the core of Cyberpunk 2077 rendering.

It's a classic deferred renderer, with a fairly vanilla g-buffer layout. We don't see the crazy number of buffers of, say, Sucker Punch's PS4 launch title Infamous: Second Son, nor complex bit-packing and re-interpretation of channels.

Immediately recognizable g-buffer layout
  • 10.10.10.2 Normals, with the 2-bit alpha reserved to mark hair
  • 10.10.10.2 Albedo. Not clear what the alpha is doing here, it seems to just be set to one for everything drawn, but it might be only the captures I got
  • 8.8.8.8 Metalness, Roughness, Translucency and Emissive, in this order (RGBA)
  • Z-buffer and Stencil. The latter seems to isolate object/material types. Moving objects are tagged. Skin. Cars. Vegetation. Hair. Roads. Hard to tell / would take time to identify the meaning of each bit, but you get the gist...
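To make the first entry concrete, this is roughly what packing into a 10.10.10.2 target looks like - a small Python sketch of the format itself (with the 2-bit alpha used as the hair marker, per the guess above), not CDPr's actual code:

    def pack_normal_10_10_10_2(nx, ny, nz, hair_flag=0):
        # Map each component from [-1, 1] to a 10-bit unsigned integer in [0, 1023].
        def to_unorm10(v):
            return int(round((v * 0.5 + 0.5) * 1023.0)) & 0x3FF
        # x, y, z in the low 30 bits, the 2-bit alpha (hair marker) on top.
        return (to_unorm10(nx)
                | (to_unorm10(ny) << 10)
                | (to_unorm10(nz) << 20)
                | ((hair_flag & 0x3) << 30))

    def unpack_normal_10_10_10_2(packed):
        def from_unorm10(bits):
            return (bits / 1023.0) * 2.0 - 1.0
        return (from_unorm10(packed & 0x3FF),
                from_unorm10((packed >> 10) & 0x3FF),
                from_unorm10((packed >> 20) & 0x3FF),
                (packed >> 30) & 0x3)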

If we look at the frame chronologically, it starts with a bunch of UI draws (that I didn't investigate further), a bunch of copies from a CPU buffer into VS constants, then a shadowmap update (more on this later), and finally a depth pre-pass.

Some stages of the depth pre-pass.

This depth pre-pass is partial (not drawing the entire scene) and is only used to reduce the overdraw in the subsequent g-buffer pass.

Basically, all the geometry draws are using instancing and some form of bindless textures. I'd imagine this was a big part of updating the engine from The Witcher 3 to contemporary hardware. 

Bindless also makes it quite annoying to look at the capture in RenderDoc, unfortunately - by spot-checking I could not see too many different shaders in the g-buffer pass - perhaps a sign that artists were not allowed to make shaders via visual graphs?

Other wild guesses: I don't see any front-to-back sorting in the g-buffer, and the depth prepass renders all kinds of geometry, not just walls, so it would seem that there is no special authoring for these (brushes, forming a BSP) - nor have artists hand-tagged objects for the prepass, as some relatively "bad" occluders make the cut. I imagine that after culling, a list of objects is sorted by shader, and from there instanced draws are dynamically formed on the CPU.

The opening credits do not mention Umbra (which was used in The Witcher 3) - so I guess CDPr rolled their own visibility solution. Its effectiveness is really hard to gauge, as visibility is a GPU/CPU balance problem, but there seem to be quite a few draws that do not contribute to the image, for what it's worth. It also looks like the rendering can at times display "hidden" rooms, so it's probably not a cell-and-portal system - I am guessing that for such large worlds it's impractical to ask artists to do lots of manual work for visibility.

A different frame, with some of the pre-pass.
Looks like some non-visible rooms are drawn then covered by the floor - which might hint at culling done without old-school brushes/BSP/cell&portals?

Lastly, I didn't see any culling done GPU side, with depth pyramids and so on, no per-triangle or cluster culling or predicated draws, so I guess all frustum and occlusion culling is CPU-side.

Note: people are asking if "bad" culling is the reason for the current performance issues, I guess meaning on ps4/xb1. This inference cannot be made, nor can the visibility system be called "bad" - as I wrote already. FWIW - it seems mostly that the consoles struggle with memory and streaming more than anything else. Who knows...

Let's keep going... After the main g-buffer pass (which seems to be always split in two - not sure if there's a rendering reason or perhaps these are two command buffers done on different threads), there are other passes for moving objects (which write motion vectors - the motion vector buffer is first initialized with camera motion).

This pass includes avatars, and the shaders for these objects do not use bindless (perhaps that's used only for world geometry) - so it's much easier to see what's going on there if one wants to.

Finally, we're done with the main g-buffer passes, depth-writes are turned off and there is a final pass for decals. Surprisingly these are pretty "vanilla" as well, most of them being mesh decals.

Mesh decals bind as inputs (a copy of) the normal buffer, which is interesting, as one might imagine the 10.10.10 format was chosen to allow for easy hardware blending, but it seems that some custom blend math is used as well - something important enough to pay the price of making a copy (on PC at least).

A mesh decal - note how it looks like the original mesh with the triangles that do not map to decal textures removed.

It looks like only triangles carrying decals are rendered, using special decal meshes, but other than that everything is remarkably simple. It's not bindless either (only the main static geometry g-buffer pass seems to be), so it's easier to see what's going on here.

At the end of the decal pass we sometimes see projected decals as well. I haven't investigated the dynamic ones created by weapons, but the static ones on the levels are just applied with tight boxes around geometry, I guess hand-made, without any stencil-marking technique (which would probably not help in this case) to try to minimize the shaded pixels.

Projected decals do bind depth-stencil as input as well, obviously as they need the scene depth, to reconstruct world-space surface position and do the texture projection, but probably also to read stencil and avoid applying these decals on objects tagged as moving.

A projected decal, on the leftmost wall (note the decal box in yellow)
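Incidentally, the world-space reconstruction from depth that these decal shaders (and several later passes) rely on is the usual unprojection through the inverse view-projection matrix. A hedged numpy sketch of the idea - the conventions (D3D-style clip space, UV origin at the top) are assumptions, not something read out of the game:

    import numpy as np

    def world_from_depth(u, v, depth, inv_view_proj):
        # u, v are screen UVs in [0, 1]; depth is the raw depth-buffer value in [0, 1].
        # Build the clip-space position, unproject, then do the perspective divide.
        ndc = np.array([u * 2.0 - 1.0, (1.0 - v) * 2.0 - 1.0, depth, 1.0])
        world = inv_view_proj @ ndc
        return world[:3] / world[3]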

As for the main g-buffer draws, many of the decals might end up not contributing at all to the image, and I don't see much evidence of decal culling (as some tiny ones are drawn) - but it also might depend on my chosen settings.

The g-buffer pass is quite heavy, but it has lots of detail, and it is of course the only pass that depends on scene geometry - a fraction of the overall frame time. E.g. look at the normals on the ground, pushed beyond the point of aliasing. At least in this PC capture, textures seem even biased towards aliasing, perhaps knowing that temporal will resolve them later (which it absolutely does in practice; rotating the camera often reveals texture aliasing that immediately gets resolved when you stop - not a bad idea, especially as noise during view rotation can be masked by motion blur).

1:1 crop of the final normal buffer

A note re: Deferred vs Forward+

Most state-of-the-art engines are deferred nowadays. Frostbite, Guerrilla's Decima, Call of Duty BO3/4/CW, Red Dead Redemption 2, Naughty Dog's Uncharted/TLOU and so on.

On the other hand, the amount of advanced trickery that Forward+ allows is unparalleled, and it has been adopted by a few to do truly incredible rendering, see for example the latest Doom games, or have a look at the mind-blowing tricks behind Call of Duty: Modern Warfare / Warzone (and the previous Infinite Warfare, which was the first time that COD line moved from being a crazy complex forward renderer to a crazy complex forward+).

I think the jury is still out on all this, and as with most things rendering (or, well, coding!), we don't know anything about what's optimal, we just make/inherit choices and optimize around them.

That said, I'd wager this was a great idea for CP2077 - and I'm not surprised at all to see this setup. As we'll see in the following, CP2077 does not seem to have baked lighting, relying instead on a few magic tricks, most of which operating in screen-space.

For these to work, you need to know materials and normals before lighting, so you need to write a g-buffer anyway. You also need temporal reprojection, so you want motion vectors, and to compute the lighting effects in separate passes (that you can then appropriately reproject, filter and composite).

I would venture to say also that this was done not because of the need for dynamic GI - there's very little from what I've seen in terms of moving lights and geometry is not destructible. I imagine instead, this is because the storage and runtime memory costs of baked lighting would be too big. Plus, it's easier to make lighting interactive for artists in such a system, rather than trying to write a realtime path-tracer that accurately simulates what your baking system results would be...

Lastly, as we're already speculating, I'd imagine that CDPr wanted to really focus on artists and art. A deferred renderer can help there in two ways. First, its performance is less coupled with the number of objects and vertices on screen, as only the g-buffer pass depends on them, so artists can be a smidge less "careful" about these.
Second, it's simpler, overall - and in an open-world game you already have to care about so many things that having to carefully tune gigantic forward+ shaders for occupancy is not a headache you want to deal with...

Lighting part 1: Analytic lights

Obviously, no deferred rendering analysis can stop at the g-buffer; we split shading in two, and now we have to look at the second half, how lighting is done.

Here things become a bit dicier, as in the modern age of compute shaders, everything gets packed into structures that we cannot easily see. Even textures can be hard to read when they do not carry continuous data but pack who-knows-what into integers.

Normal packing and depth pyramid passes.

Regardless, it's pretty clear that after all the depth/g-buffer work is said and done, an uber-summarization pass kicks in, taking care of a bunch of depth-related stuff.

RGBA8 packed normal (&roughness). Note the speckles that are a tell-tale of best-fit-normal encoding.
Also, note that this happens after hair rendering - which we didn't cover.

It first packs normal and roughness into a RGBA8 using Crytek's lookup-based best-fit normal encoding, then it creates a min-max mip pyramid of depth values.

The pyramid is then used to create what looks like a volumetric texture for clustered lighting.
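A min-max depth pyramid is simple enough to sketch: each mip stores, per texel, the nearest and farthest depth of the 2x2 texels below it. A numpy toy version, assuming power-of-two dimensions for brevity (the real thing is of course a compute shader):

    import numpy as np

    def min_max_depth_pyramid(depth):
        # depth is a 2^n x 2^m array of depth values; returns a list of (min, max) mips.
        mips = [(depth, depth)]
        d_min, d_max = depth, depth
        while d_min.shape[0] > 1 and d_min.shape[1] > 1:
            # Group texels into 2x2 blocks and reduce each block to its min / max.
            d_min = d_min.reshape(d_min.shape[0] // 2, 2, d_min.shape[1] // 2, 2).min(axis=(1, 3))
            d_max = d_max.reshape(d_max.shape[0] // 2, 2, d_max.shape[1] // 2, 2).max(axis=(1, 3))
            mips.append((d_min, d_max))
        return mips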

A slice of what looks like the light cluster texture, and below one of the lighting buffers partially computed. Counting the pixels in the empty tiles, they seem to be 16x16 - while the clusters look like 32x32?

So - from what I can see it looks like a clustered deferred lighting system. 

The clusters seem to be 32x32 pixels in screen-space (froxels), with 64 z-slices. The lighting though seems to be done at a 16x16 tile granularity, all via compute shader indirect dispatches.
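For reference, addressing such a froxel grid usually boils down to something like the sketch below - with the caveat that the 32x32 / 64-slice numbers are my guesses from the capture, and the exponential z distribution is just the common choice, not something I verified:

    import math

    def froxel_index(px, py, view_z, z_near=0.1, z_far=1000.0,
                     tile=32, z_slices=64, screen_w=1920, screen_h=1080):
        # Map a pixel and its (positive) view-space depth to a cluster index.
        cx = px // tile
        cy = py // tile
        # Exponential slicing: each slice covers an equal ratio of depth.
        cz = int(math.log(view_z / z_near) / math.log(z_far / z_near) * z_slices)
        cz = min(max(cz, 0), z_slices - 1)
        clusters_x = (screen_w + tile - 1) // tile
        clusters_y = (screen_h + tile - 1) // tile
        return cx + cy * clusters_x + cz * clusters_x * clusters_y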

I would venture this is because CS are specialized by both the materials and lights present in a tile, and then dispatched accordingly - a common setup in contemporary deferred rendering systems (e.g. see Call of Duty Black Ops 3 and Uncharted 4 presentations on the topic).

The analytic lighting pass outputs two RGBA16 buffers, which seem to be the diffuse and specular contributions. Regarding the options for scene lights, I would not be surprised if all we have are spot/point/sphere lights and line/capsule lights. Most of Cyberpunk's lights are neons, so line light support is definitely a must.

You'll also notice that a lot of the lighting is unshadowed, and I don't think I ever noticed multiple shadows under a single object/avatar. I'm sure the engine does not have limitations in that respect, but all this points at lighting that is heavily "authored", with artists carefully placing shadow-casting lights. I would also not be surprised if the lights have manually assigned bounding volumes to avoid leaks.

Final lighting buffer (for analytic lights) - diffuse and specular contributions.

Lighting part 2: Shadows

But what we just saw does not mean that shadows are unsophisticated in Cyberpunk 2077, quite the contrary, there are definitely a number of tricks that have been employed, most of them not at all easy to reverse!

First of all, before the depth-prepass, there are always a bunch of draws into what looks like a shadowmap. I suspect this is a CSM, but in the capture I have looked at, I have never seen it used, only rendered into. This points to a system that updates shadowmaps over many frames, likely with only static objects?

Is this a shadowmap? Note that there are only a few events in this capture that write to it, none that reads - it's just used as a depth-stencil target, if RenderDoc is correct here...

These multi-frame effects are complicated to capture, so I can't say if there are further caching systems (e.g. see the quadtree compressed shadows of Black Ops 3) at play. 

One thing that looks interesting is that if you travel fast enough through a level (e.g. in a car) you can see that the shadows take some time to "catch up" and they fade in incrementally in a peculiar fashion. It almost appears like there is a depth offset applied from the sun point of view, that over time gets reduced. Interesting!

This is hard to capture in an image, but note how the shadow in time seems to crawl "up" towards the sun.

Sun shadows are pre-resolved into a screen-space buffer prior to the lighting compute pass, I guess to simplify compute shaders and achieve higher occupancy. This buffer is generated in a pass that binds quite a few textures, two of which look CSM-ish. One is clearly a CSM, with in my case five entries in a texture array, where slices 0 to 3 are different cascades, but the last slice appears to be the same cascade as slice 0 but from a slightly different perspective. 

There's surely a lot to reverse-engineer here if one was inclined to do the work!

The slices of the texture on the bottom (in red) are clearly CSM. The partially rendered slices in gray are a mystery. The yellow/green texture is, clearly, resolved screen-space sun shadows, I've never, so far, seen the green channel used in a capture.

All other shadows in the scene are some form of VSMs, computed again incrementally over time. I've seen 512x512 and 256x256 used, and in my captures, I can see five shadowmaps rendered per frame, but I'm guessing this depends on settings. Most of these seem only bound as render targets, so again it might be that it takes multiple frames to finish rendering them. One gets blurred (VSM) into a slice of a texture array - I've seen some with 10 slices and others with 20.

A few of the VSM-ish shadowmaps on the left, and artefacts of the screen-space raymarched contact shadows on the right, e.g. under the left arm, the scissors and other objects in contact with the plane...

Finally, we have what the game settings call "contact shadows" - which are screen-space, short-range raymarched shadows. These seem to be computed by the lighting compute shaders themselves, which would make sense as these know about lights and their directions...
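
Conceptually, a contact-shadow pass takes a handful of short steps from the pixel towards the light in screen space and declares the pixel shadowed if the depth buffer pokes in front of the ray. A hedged sketch; the actual step count, thresholds, and spaces used by the game are unknown to me:

```cpp
#include <functional>

struct ScreenRay {
    float u, v, depth;      // start position (texture coords + linear depth)
    float du, dv, ddepth;   // per-step increments towards the light
};

float contactShadow(ScreenRay ray, int steps,
                    const std::function<float(float, float)>& sampleLinearDepth,
                    float thickness = 0.05f)
{
    float u = ray.u, v = ray.v, d = ray.depth;
    for (int i = 0; i < steps; ++i) {
        u += ray.du; v += ray.dv; d += ray.ddepth;
        if (u < 0.f || u > 1.f || v < 0.f || v > 1.f) break;
        float sceneDepth = sampleLinearDepth(u, v);
        // Occluded if the scene is in front of the ray, but not by too much
        // (the "thickness" heuristic avoids shadowing from far-away geometry).
        if (sceneDepth < d && d - sceneDepth < thickness) return 0.0f;
    }
    return 1.0f;  // 1 = fully lit
}
```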

Overall, shadows are both simple and complex. The setup, with CSMs, VSMs, and optional raymarching, is not overly surprising, but I'm sure the devil is in the details of how all of these are generated and faded in. It's rare to see obvious artifacts, so the entire system has to be praised, especially in an open-world game!

Lighting part 3: All the rest...

Since booting the game for the first time, I had the distinct sense that most lighting is actually not in the form of analytic lights - and indeed, looking at the captures, this does not seem unfounded. At the same time, there are no lightmaps, and I doubt there's anything pre-baked at all. This is perhaps one of the most fascinating parts of the rendering.

The first highlighted pass is the bent-cone AO for this frame; the remaining passes do smoothing and temporal reprojection.

First of all, there is a very good half-res SSAO pass. This is computed right after the uber-depth-summarization pass mentioned before, and it uses the packed RGBA8 normal-roughness instead of the g-buffer one. 

It looks like it's computing bent normals and aperture cones - impossible to tell the exact technique, but it's definitely doing a great job, probably something along the lines of HBAO/GTAO. First, depth, normal/roughness, and motion vectors are all downsampled to half-res. Then a pass computes the current-frame AO, and subsequent ones do bilateral filtering and temporal reprojection. The dithering pattern is also quite regular - if I had to guess, probably Jorge Jimenez's interleaved gradient noise?
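
If the dither really is Jimenez's interleaved gradient noise, it is just this per-pixel one-liner (constants from his SIGGRAPH 2014 talk); that it's what the game uses is only my guess from the pattern:

```cpp
#include <cmath>

// frac(52.9829189 * frac(dot(pixel, float2(0.06711056, 0.00583715))))
float interleavedGradientNoise(float pixelX, float pixelY) {
    float f = 0.06711056f * pixelX + 0.00583715f * pixelY;
    return std::fmod(52.9829189f * std::fmod(f, 1.0f), 1.0f);
}
```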

It's easy to guess that the separate diffuse and specular buffers emitted from the lighting pass are there to make it easier to occlude each of them more correctly with the cone information.

One of many specular probes that get updated in an array texture, generating blurred mips.

Second, we have to look at indirect lighting. After the light clustering pass, there are a bunch of draws that update a texture array of what appear to be spherically (or dual-paraboloid?) unwrapped probes. Again, this work is distributed across frames: not all slices of the array are updated each frame. It's not hard to see in captures that some part of the probe array gets updated with new probes, generating mipmaps on the fly, presumably GGX-prefiltered.
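
If the probes are indeed dual-paraboloid unwrapped, the direction-to-texture-coordinate mapping is the classic one below (front hemisphere shown; the back hemisphere uses -z and the other half of the atlas). Again, a guess at what the unwrapping might be, not something confirmed from the capture:

```cpp
struct Uv { float u, v; };

// dir = (x, y, z) must be normalized; valid for the hemisphere with z >= 0.
Uv dualParaboloidFront(float x, float y, float z) {
    float denom = 1.0f + z;               // paraboloid projection
    return { x / denom * 0.5f + 0.5f,     // remap [-1, 1] to [0, 1]
             y / denom * 0.5f + 0.5f };
}
```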

A mysterious cubemap. It looks like it's compositing the sky (which I guess updates dynamically with time of day) with some geometry. Is the red channel an extremely thin g-buffer?

The source of the probe data is harder to find, though. In the main capture I'm using, there seems to be something that looks like specular cubemap relighting happening; it's not obvious to me whether this is a different probe from the ones in the array, or the source of the array data later on.

Also, it's hard to say whether or not these probes are hand-placed in the level. If the relighting assumption is true, then I'd imagine the locations are fixed, and perhaps artists placed volumes or planes to define the influence area of each probe and avoid leaks.

A slice of the volumetric lighting texture, and some disocclusion artefacts and leaks in a couple of frames.

We have your "standard" volumetric lighting, computed in a 3D texture, with temporal reprojection. The raymarching is clamped using the scene depth, presumably to save performance, but this, in turn, can lead to leaks and reprojection artifacts at times - though not too evident in most cases.
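
The depth clamp is easy to picture: the in-scattering integration walks the volume slices front to back and simply stops at the first slice behind the opaque depth, which is also where the occasional leak or disocclusion artifact comes from. A simplified scalar sketch, with hypothetical inputs:

```cpp
#include <cmath>
#include <functional>

float integrateScattering(int numSlices, float sliceThickness, float sceneDepth,
                          const std::function<float(int)>& inScatterAtSlice,
                          float extinction) {
    float transmittance = 1.0f;
    float accumulated = 0.0f;
    for (int i = 0; i < numSlices; ++i) {
        float sliceFar = (i + 1) * sliceThickness;
        if (sliceFar > sceneDepth) break;  // the clamp: don't march behind opaque geometry
        float stepTransmittance = std::exp(-extinction * sliceThickness);
        accumulated += inScatterAtSlice(i) * transmittance * (1.0f - stepTransmittance);
        transmittance *= stepTransmittance;
    }
    return accumulated;
}
```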

Screen-Space Reflections

Now, things get very interesting again. First, there is an amazing Screen-Space Reflection pass, which again uses the packed normal/roughness buffer and thus supports blurry reflections; at least at my rendering settings, it is done at full resolution.

It uses previous-frame color data, from before UI compositing, for the reflection (using motion vectors to reproject). And it produces quite a lot of noise, even though it employs a blue-noise texture for dithering!
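
The interesting bit is where the reflected color comes from: once the march finds a hit, the color is fetched from the previous frame (pre-UI), reprojected via the motion vector at the hit pixel. A sketch of just that step, assuming a current-to-previous motion-vector convention, with all inputs as stand-ins:

```cpp
#include <functional>

struct Float2 { float x, y; };
struct Float3 { float r, g, b; };

Float3 ssrHitColor(Float2 hitUv,
                   const std::function<Float2(Float2)>& motionVectorAt,
                   const std::function<Float3(Float2)>& previousFrameColorAt) {
    Float2 mv = motionVectorAt(hitUv);                    // current -> previous offset
    Float2 prevUv = { hitUv.x - mv.x, hitUv.y - mv.y };   // reproject the hit
    return previousFrameColorAt(prevUv);                  // pre-UI color from last frame
}
```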

Diffuse/Ambient GI, reading a volumetric cube, which is not easy to decode...

Then comes an indirect diffuse/ambient GI pass. It binds the g-buffer and a bunch of 64x64x64 volume textures that are hard to decode. From the inputs and outputs, one can guess that the volume is centered around the camera and contains indices into some sort of computed irradiance, maybe spherical harmonics or such.

The lighting is very soft/low-frequency, and indirect shadows are not really visible in this pass. This might even be dynamic GI!

It certainly is volumetric, which has the advantage of being "uniform" across all objects, moving or not, and this coherence shows in the final game.

Final lighting composite, diffuse plus specular, and specular-only.

And finally, everything gets composited together: specular probes, SSR, SSAO, diffuse GI, analytic lighting. This pass again emits two buffers: one seems to be the final lighting, and a second holds what appears to be only the specular part.
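
My guess at the shape of that composite (definitely not the real shader): indirect terms modulated by the screen-space occlusion data, analytic lighting added on top, and SSR taking over from the probe specular where it has valid data.

```cpp
struct Rgb { float r, g, b; };

static Rgb scale(Rgb c, float s) { return {c.r * s, c.g * s, c.b * s}; }
static Rgb add(Rgb a, Rgb b)     { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
static Rgb mix(Rgb a, Rgb b, float t) {
    return {a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t};
}

Rgb compositeLighting(Rgb analyticDiffuse, Rgb analyticSpecular,
                      Rgb diffuseGI, Rgb probeSpecular,
                      Rgb ssrColor, float ssrConfidence,
                      float diffuseAO, float specularAO) {
    // SSR overrides the probe where it found valid screen-space data.
    Rgb indirectSpecular = mix(scale(probeSpecular, specularAO), ssrColor, ssrConfidence);
    Rgb diffuse  = add(analyticDiffuse, scale(diffuseGI, diffuseAO));
    Rgb specular = add(analyticSpecular, indirectSpecular);
    return add(diffuse, specular);
}
```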

And here is where we can see what I said at the beginning. Most lighting is not from analytic lights! We don't see the usual tricks of the trade, with lots of "fill" lights added by artists (albeit the light design is definitely very careful); instead, indirect lighting is what makes most of the scene. This indirect lighting is not as "precise" as in engines that rely more heavily on GI bakes and complicated encodings, but it is very uniform, and it regains high-frequency effects via the two very high-quality screen-space passes, AO and reflections.


The screen-space passes are quite noisy, which in turn makes temporal reprojection really fundamental, and this is another extremely interesting aspect of this engine. Traditional wisdom says that reprojection does not work in games that have lots of transparent surfaces. The sci-fi worlds of Cyberpunk definitely qualify for this, but the engineers here did not get the news and made things work anyway!

And yes, sometimes it's possible to see reprojection artifacts, and the entire shading can have a bit of "swimming" in motion, but in general it's solid and coherent - qualities that even many engines using lightmaps cannot claim to have. Light leaks are not common, and silhouettes are usually well shaded and properly occluded.

All the rest

There are lots of other effects in the engine we won't cover - for brevity and to keep my sanity. Hair is very interesting, appearing to render multiple depth slices and inject itself partially into the g-buffer with some pre-lighting and a weird normal (fake anisotropic?) effect. Translucency/skin shading is surely another important effect I won't dissect.

Looks like charts caching lighting...

Before the frame is over though, we have to mention transparencies - as more magic is going on here for sure. First, there is a pass that seems to compute a light chart, I think for all transparencies, not just particles.

Glass can blur whatever is behind it, and this is done with a specialized pass: first, transparent geometry is rendered into a buffer that accumulates the blur amount, then a series of compute shaders creates three mips of the screen, and finally everything is composited back into the scene.
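
In other words, the composite likely just picks between the sharp scene and progressively blurrier mips based on the accumulated blur amount. A hypothetical sketch of that selection, with the mip chain passed in as stand-in samplers:

```cpp
#include <algorithm>
#include <array>
#include <functional>

struct Rgb { float r, g, b; };

Rgb sampleBlurredScene(float blurAmount,            // 0 = sharp, 1 = max blur
                       const std::array<std::function<Rgb(float, float)>, 4>& mips,
                       float u, float v) {
    // Map the blur amount to a fractional level across the 3 extra mips.
    float level = std::clamp(blurAmount, 0.0f, 1.0f) * 3.0f;
    int lo = static_cast<int>(level);
    int hi = std::min(lo + 1, 3);
    float t = level - lo;
    Rgb a = mips[lo](u, v), b = mips[hi](u, v);
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
}
```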


After the "glass blur", transparencies are rendered again, together with particles, using the lighting information computed in the chart. At least at my rendering settings, everything here is done at full resolution.

Scene after glass blur (in the inset) and with the actual glass rendered on top (big image)

Finally, the almighty temporal reprojection. I would really like to see the game without this; the difference before and after the temporal reprojection is quite amazing. There is some sort of dilated-mask magic going on but, to be honest, I can't see anything too bizarre - it's astonishing how well it works.
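
For reference, the core of most temporal reprojection schemes is small: fetch the history with the motion vector, clamp it to the neighborhood of the current pixel to reject stale data, and blend with a small per-frame weight. Whatever extra dilated-mask magic the game does sits on top of something like this (shown per channel for brevity):

```cpp
#include <algorithm>

float taaResolve(float current, float history,
                 float neighborhoodMin, float neighborhoodMax,
                 float blendFactor = 0.1f) {
    // Clamping the history to the current 3x3 neighborhood range limits ghosting.
    float clampedHistory = std::clamp(history, neighborhoodMin, neighborhoodMax);
    return clampedHistory + (current - clampedHistory) * blendFactor;
}
```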

Perhaps there are some very complicated secret recipes lurking somewhere in the shaders or beyond my ability to understand the capture.

On the left, current and previous frame, on the right, final image after temporal reprojection.

This is from a different frame, a mask that is used for the TAA pass later on...

I wrote "finally" because I won't look any further, i.e. into the details of the post-effect stack - things here are not too surprising. Bloom is a big part of it, of course, almost adding another layer of indirect lighting, and it's top-notch as expected: stable and wide.

Depth of field, of course, tone-mapping and auto-exposure... There are, of course, all the image-degradation fixings you'd expect and probably want to disable: film grain, lens flares, motion blur, chromatic aberration... Even the UI compositing is non-trivial, all done in compute, but who has the time... Now that I've got all this off my chest, I can finally try to go and enjoy the game! Bye!