26 June, 2022

Machines Arose

The era of algorithmic slavery.

When we think of the rise of the machines, we picture Skynet and The Matrix. Humanity literally fighting the AI, with big pew-pew guns, and getting enslaved by it. Heroes seeing through the deception, illuminated minds, perhaps looking insane to the average bystander, purposed with a higher calling.

We lose ourselves in the bombast of Hollywood, we take metaphors literally, we fear or dream of the singularity, look for signs of consciousness in the code we write.

A lesser known game from Bethesda... 2029 is close!

In reality, the danger is the opposite. It's not how much consciousness the machines gain. It is, rather, how much they remove from us. Yes, the recent "LaMDA is sentient" BS is not much more than a bad publicity stunt - but that doesn't mean that Google is not scary!

We are of course already dependent on machines - that, per se, is not the problem; the problem is our degree of attachment to them. We are dependent on all the technology we create. It's the defining feature of humanity to better itself through technology, and it has been true since we made fire.

For millennia we have used technology to elevate ourselves, to free us from the minutiae of living and sublimate our spirit, enabling higher forms of creativity, allowing us to dedicate more time to work that is intellectual in nature.

You can call this productivity, even augmented intelligence - once we discovered that technology is not just good for easing physical labor, but can be shaped into tools for better thinking.

Sougwen Chung (愫君) - Machines can be tools that augment our creativity. 

Is this trend going to end one day? Is it already ending?

Will we live in a world where it’s increasingly hard to be a value-add via the use of technology, but rather most of us will be made irrelevant by it? What happens to the masses that can’t produce anything of interest?

Can our creativity outpace the machine’s forever? 

https://www.youtube.com/watch?v=g9Z0pqsCUhY

One can argue that a tool remains a tool, and in the history of the world, short-sighted people always lamented when creation became more accessible, from painting to film photography, from film to digital cameras, from cameras to smartphones. 

There is always someone lamenting the loss of "true" art - and they are always wrong... but! At the same time, we have enough historical evidence of machines displacing jobs, labor having to learn new skills, often painfully, for the generations caught in the transition. 

There is some reason to worry, then - but it's not the key to the story here. Creativity is likely to remain firmly in the domain of humans; in fact, one could say that a truly creative machine would need to be a conscious one, and that is not the scenario I'm interested in.

The danger is subtler, closer and more real. 

Do we already live in a world where many creators are replaceable slaves, being milked for content by algorithms that are the true holders of value?


AIs feed us during most of our days. Shodan's tools are videos of kittens, dogs and babies. And her minions are willingly joining, hoping for visibility and connection. 

It's a marvelous machine that exploits the brain chemistry of consumers with cheap dopamine, and of creators too: as we push out our photos and videos for follows, we increasingly define our value in society by the number of likes we get.

How conscious are we, when most of our connections are software mediated, and sentiment analyzed? The algorithm does not know when to stop, and neither do our brains. Dopamine is the AI’s sugar.

We do not need to be intubated, in pods, to be enslaved. We don't even need to be forced: once we create a system that dispenses some short-term pleasure, we willingly subjugate ourselves to it.

Don’t take your science fiction literally.

I don't fear the sentient AI and the singularity. I don't care much about privacy and crypto-anarchism. I think we are looking at the wrong problems. Even the worries about physical changes in our cognitive abilities, psychology and looks might be overstated - as we are very plastic, we adapt.

And however despicable the role that simplistic recommendation algorithms, shares and likes play in creating information bubbles and driving polarization, we are beginning to understand and rebel - systems might be tuned differently...

The existence of such a system though, per se, and the fact that it can be tuned - can that ever possibly be moral? Are we not admitting that we are losing agency, if the way a machine operates controls society?

This is a Silicon Valley problem that SV cannot solve for itself. It's the natural evolution of companies to want to be successful, and we are in a world where success means engaging billions of people, capturing a large percent of their time and attention.

These systems can hardly be called tools, and are clearly not in our control.

11 June, 2022

Real-time rendering - past, present and a probable future.

This presentation was a keynote given to a private company event - I'm not sure if I'm at liberty to say more about it - but the content is quite universal, so I hope you'll enjoy!

It does not talk directly of Roblox or the Metaverse... but at the same time, it has, near the end, some strong connections to it.

Slides here!

Also... this is not the first "open problems" slide deck I make, and I mentioned an unfinished one in previous presentations... I realize I will never finish it - or rather, I am not as passionate about it anymore, so... here it is, frozen in its eternal WIP state: slides - circa 2015



10 April, 2022

DOS Nostalgia: On using a modern DOS workstation.

Premise. 

This blog post is useless. And rambling. As useless as the machine I'm typing this on, a Pentium 3 subnotebook from the '90s. You have been warned!

But it might be entertaining, and I suspect many of the people doing what I do and reading what I write are in a similar demographic, starting to get nostalgic, thinking back to their formative years and wondering if they're worth revisiting...

Objectives. 

I wanted to find a DOS machine, not for retrogaming (only), but to do actual "work". Even more narrowly, I had an idea of trying to compile an old DOS demo I made in the nineties, the only production of a short-lived Italian group called "day zero deflection" (you won't find it).

Monotasking. No internet. These things are so appealing to me right now. One tries to escape the dopamine rush of doomscrolling on all the connected devices that surround us. The flesh is weak, and instead of trying to muster the required willpower, shopping for a hardware solution seems so much more attractive. Of course, it's a fool's errand, but hey, I said this post was going to be useless.

A long intermezzo of personal history.

(skip this!) 

It's interesting how memory works. So non-linear, and unreliable. I used a lot of computers in my life, and I started early: I began programming around six or seven years old.

This past Christmas, as the pandemic eased up, I was able again to fly and spend time with my family in southern Italy. I found one of the Commodore 64s we had.

The c64 in question. Yes, it needed some love - albeit to my surprise, all my disks worked, with my childhood code! The video glitch is actually a quite mysterious defect, but it's a story for another time...

"We", because I grew up with my older cousins. My mother is the last of eleven siblings, so I have a lot of cousins, many of them close to my house: my family used to be farmers, and thus had land that eventually became buildings, with many of my aunts and uncles ending up living in the same residential complex.

These older cousins taught me programming, and I was using their computers before having my own. In fact, the c64 I found is most likely theirs, as mine was eventually donated to some relative that needed it more.

I remember a lot of this, in detail, albeit I don't know anymore what details are real and what ended up as images remixed from different time eras.

We were in the basement of my aunt's villa, just next door to the building I grew up in, where we had an apartment on the top floor. We would transfer things between the two by lowering a rope from the balcony down to the villa's garden. Later, when we had PCs and network cards, we moved bits between the buildings, having suspended a coax cable that ran from the second floor of my building (where another cousin lived) to my floor, to the villa.

The basement was originally the studio of my uncle, who was the town's priest. I was named after him. He and one of his sisters died in a car accident when I was little, so I am not sure I really remember him, sadly.

But I remember the basement, the Commodore 64, and later an 8086 with an external hard drive the same size and shape as the main unit. A monochrome monitor, amber I think, or perhaps it could do both amber and green, with a configuration switch.

I remember all of the c64 games we played, easily. I remember bits of my coding journey, the books we used to study, and my cousin once being dismayed that I could not figure out how to make a cursor move on the screen (the math to go to the next/previous row), even if it was mostly a misunderstanding.

I remember playing with my Amiga 600 there too, Body Blows - I switched to the Amiga after visiting... another cousin, this time, in Milan.

I remember the first Pentium they had because it allowed me to use more 3d graphics software. 3d studio 4 without having to resort to software 387 emulation! At the time I had an IBM PS/2 with a 486sx which the seller persuaded my father would be better than a 486dx another guy was offering us - who needs a math coprocessor, and IBM is a much better brand than something home-made... And I know that numerous times I lost all the data on these computers that I did not own, often by typing "format" too fast and putting the wrong drive letter in.

And then, nothing? Everything more modern than that I sort of lost - or rather, it all becomes more confused. I know the places I went shopping for (pirated) software and hardware, maybe some of the faces, not sure.

I know I used to lug my PC tower for the few kilometers that separated my house in Scafati from the "shop" (really a private apartment) that I used to go to in Pompei, as I was a kid, and did not have a car of course.

And that tells me that I had lots of different PC configurations over the years, LOTS of them: AMD, Intel, Voodoo cards, a Matrox of some sort, even a Sound Blaster AWE32 at one point, a CD-ROM and the early CD games. I remember the excitement for each new accessory and card, and the intense hate for cable and thermal management, especially on more modern setups.

I remember scanners, the first were hand-held (Logitech ScanMan, then Trust), printers, joysticks, graphics tablets when I got into photography, the very first digital camera I had (I think an Olympus). It's all "PC" for me, I have no idea of what I was using in which year.

At one point, around university, I switched to primarily using laptops. Acer or Asus, something cheap and powerful, but they would break often (cheap plastics). Then finally the MacBook Pro, and that one has remained a constant, still today my primary personal machine.

So. My nostalgia is about three machines, really, even if I had dozens. The Commodore 64, the one I remember the most: I am eager to play around with it more, and I ordered all sorts of HW, but I have no intention of using it "daily" - that one belongs in a museum.

The MiSTer FPGA c64 core is great and can output 50Hz!

The Amiga, which for some reason I don't care as much about anymore - I suspect mostly because I used it primarily for games, so I did not create as much on it. I think that was the key.

I had some graphics programs, but I was not a great 2d artist (DeluxePaint) and I did not understand enough of the 3d tools I happened to get my hands on (Real3D, VistaPro)... and I did no coding on it. At one point, I had a (pirated) copy of AMOS, but no manual.

Swapping disks, real or virtual, is also not fun.

And then the PC, specifically the 486sx that I used both for programming again (QBasic, PowerBasic, Assembly then C with DJGPP), for graphics (Imagine, then Lightwave among others), photography, the internet...

That 486 captures all of my PC memories, even if I know it's wrong. For example, during my C demo-coding times, I must have had a different computer, because the demo we were making would never run on a 486: it was sVGA. I even remember coding our sVGA layer, fixing a bug in the Matrox VESA BIOS - it was out of spec, not setting the viewport to be the same as the screen resolution when changing the latter, and many demos ran with the wrong line pitch because of that. Not mine! And the demo was, for some reason, writing buffers in separate R,G,B planes, with some MMX code I made to then shuffle them back into the display frame.

So, it could not have been the 486 - but this is great, it gives me the freedom of not trying to recreate a particular setup but instead going for that same feeling and toolset I remember using, on an entirely different system. 

What do we "need"? 

Here's the plan. First and foremost, we'll get a laptop, because I don't have space in my apartment - no, in my life - for a retrocomputing desktop or tower. Also, I want to go to hipster coffee shops and write on my hipster retro workstation, as I am doing right now.

I planned, regardless of the machine I would end up getting, to rip out the cells from the battery pack and reconstruct it - batteries are mostly a liability in old computers and I prefer the weight savings of not having them - this also means, technically, "luggable" computers could be considered.

We will look for:

  • Something fast, because if I'm buying something it must be the best I can get! I don't even care about being period-accurate, this will be a monotasking monster, not a museum piece.
  • Something I can program on, because hey, what if I like it and want to make modern retro-demos? Ideally, this means a Pentium I, Pentium Pro, or Pentium MMX, beautiful in-order CPUs with predictable pipelines I still know how to cycle-count (sort-of). But anything less than the dreadful Pentium 4 will do, P2 and P3s are OOO but still understandable enough.
  • RAM is not an issue really, and we will max out whatever configuration we will settle on. 
  • Storage is not a problem either, because we will replace whatever HDD the machine comes with with an SSD (yes, an actual SSD, albeit most people use compact-flash adapters instead) via an mSATA to PATA/IDE 2.5" enclosure, which can fit any half-size SSD (I got a 64GB one just to be "safe", as you never know the limits of old motherboards and firmware). You do want to make sure that the machine originally supported HDDs of a decent size (tens of GB), though.
  • A DOS-compatible (SoundBlaster-compatible) soundcard is a must.
  • A TFT screen is also a must. The resolution doesn't really matter, but we want something as modern as possible because old LCDs were really terrible. Ideally, 640x480 would give us the best DOS compatibility, but in practice, it's not a problem.
  • Ideally an sVGA card with good VESA/VBE compatibility, and with good scaling from the VGA resolutions (640x480 text, 320x200 graphics) to whatever the LCD resolution is (that means, either integer-scaling and the right LCD resolution or good quality filters when upsampling).
  • A USB port is highly recommended, as we want to be able to plug in a USB storage device to easily transfer files to and from modern, internet-connected machines. Setting up networking, using PCMCIA cards, etc. would be much more painful.
  • We want a good keyboard. And, because we can, we want something cool looking, maybe an iconic piece of design, not some random garbage brand. Also, something that is easy to service.
  • Reasonably priced. There is no way I burn 1000$ on this just because certain hardware is right now "hot", I find it borderline immoral. 

Expectations vs Reality.

After long, long deliberations, research on forums, scouting eBay and so on, I landed on an IBM ThinkPad 240x. The ThinkPads are amazing machines, easy to service, iconic, with great keyboards and the TrackPoint is useable in a pinch.

Beautiful! Pro-tip, a bit of 303 protectant makes the plastics look as new!

I paid around 200$ for it. You will see people getting these for 5$ at a garage sale or the like, but I'm ok paying more for something the seller verified is running, has no issues, and so on. More than that I think is crazy, but you do you...

When it arrived it looked amazing. Yes, it had scratches on the top, and even some hairline cracks, one near a hinge and one on the bottom of the chassis, but these are not a problem as I planned to disassemble the thing anyway, see if I needed to clean the internals, replace batteries, check for any leak, re-apply thermal paste if needed and so on.

Regardless of how much research you have done, the reality of the actual machine will surprise you in good and bad ways.

All the hardware setup was trivial, and all the things I thought would be hard were not. 

I gutted the battery as planned (the cells were already bulging a bit). I feared the initial OS setup the most, but my strategy worked flawlessly. I bought an IDE-to-USB adapter, connected the SSD in its SSD-to-IDE enclosure, and mapped it as a virtual drive in a VirtualBox VM with Windows 98.

That allowed me to use Win98's fdisk and format to create something I knew would be recognized by the ThinkPad - I was not at all sure the same would happen with modern tools. For extra safety, I also made two partitions under 2GB, to be able to format them with FAT16, and the remaining space was left in a third partition using FAT32.
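
If you want to replicate this, the raw-disk mapping is the only non-obvious step. On VirtualBox it looks roughly like this - the drive number below is purely an example, double-check which disk your USB adapter enumerates as, since this points the VM at a real, physical disk:

```
REM Windows host syntax; on Mac/Linux the raw disk would be e.g. /dev/disk2 instead.
VBoxManage internalcommands createrawvmdk -filename dosdisk.vmdk -rawdisk \\.\PhysicalDrive2
```

Then you attach dosdisk.vmdk to the Win98 VM as an IDE disk and partition away.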

Installing the OS was a breeze, and Lenovo still hosts all the latest IBM drivers - Windows 98 just works.

The first tiny hurdle I had to overcome was the firmware update: IBM tools are adamant about having a charged battery to perform the update... which I clearly did not have. But in reality, the tool just calls a second executable, and even though the binaries have different extensions than the defaults the flashing tool wanted, it did not take too long to figure out the right switches to use.

Upgrading the OS was also trivial; some people made install packs with all the official patches and lots of unofficial fixes (I used the MDGx ones; htasoft is an alternative), and I just grabbed one and it mostly worked. The only issue I had is that the first time around the OS stopped booting with some DMA error, but disabling a specific patch having to do with enabling DMA on drives solved it. Re-installing the OS via the SSD is relatively fast, and I also used an old copy of Norton Ghost to create snapshots.

To my surprise, even USB in DOS mostly worked (via Bret Johnson's drivers, albeit many options exist). It is not 100% reliable, nor is it fast... but it does work! Same for the TrackPoint, via cutemouse.

I ended up with the classic config.sys/autoexec.bat multiple-choice menu for things like emm386 and so on. I remember these being so painful to deal with, but in this case it was all easy, probably also because this machine has so much RAM.
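
For the uninitiated: the menu is a plain DOS 6+ feature of CONFIG.SYS itself, with AUTOEXEC.BAT branching on the chosen entry via the %CONFIG% variable. A minimal sketch of the pattern - entries and paths are illustrative, not my exact setup:

```
REM --- CONFIG.SYS: one block per boot configuration ---
[MENU]
MENUITEM=CLEAN, Plain DOS without memory managers
MENUITEM=EMS, HIMEM + EMM386 (EMS for hungry software)
MENUDEFAULT=EMS, 5

[CLEAN]

[EMS]
DEVICE=C:\WINDOWS\HIMEM.SYS
DEVICE=C:\WINDOWS\EMM386.EXE RAM

[COMMON]
DOS=HIGH,UMB

REM --- AUTOEXEC.BAT: jump to the label matching the menu choice ---
goto %CONFIG%
:CLEAN
goto END
:EMS
LH CTMOUSE.EXE
:END
```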

That is not to say there aren't problems. There are, but in a way, luckily for me, they seem to be unfixable, so I don't need to spend a ludicrous amount of time trying to overcome them (alright alright, I already did spend more time than it's worth, using DOSBox-debug and a few different decompilers to reverse an audio TSR... but I won't anymore I swear). And I did not foresee them.

First, there is the VGA. I obsessed over resolutions, because I knew that most laptops of this era do not do resolution scaling well. I had an epiphany though that allowed me to stop worrying about it. It's true that, ideally, 640x480 makes you not have to worry about scaling. But! Laptops with 640x480 screens tend to have incredibly crappy, small LCDs - so much so that the unscaled 640x480 area on a more modern laptop (say, an 800x600 panel) ends up covering more screen estate and looking better!

So, problem solved, right? Yes - if you get a card with good firmware! Unfortunately, the laptop I got has an obscure chipset that not only has crappy VESA/VBE support but is also not software-patchable via UniVBE.

Some TSRs help a bit (vbeplus, fastvid), adding more modes by using other resolutions and forcing the viewport to clip, and you can play around with caching modes, but most DOS sVGA demos do not work. 

TBH, that was just plain unlucky; most laptops would not be this bad at sVGA... but I guess you should expect to find at least one bit of "unlucky" hardware you did not think about in your machine.

The other issue is with DOS audio, and this is a biggie.

Yes, I paid attention, and I got a chipset that does support DOS SoundBlaster emulation. But OMG, nobody told me it was going to be this crappy! It's basically useless, with most software just not working at all, especially when it comes to digital audio. The OPL3 FM music fares better - it tends to work, albeit it might not sound great.

It's sad, but most DOS software, especially demos, has a much higher chance of running in Windows 98 than in pure DOS, as when Windows is loaded the audio emulation is much, much better.

This is something that apparently one simply has to live with. No PCI sound card has great DOS support, I have now learned, especially in laptops, as DOS audio support over PCI relies on a combination of the right soundcard, the right motherboard and the right firmware.

It doesn't help that often, when people online report audio working in DOS, they mean DOS-under-Windows, not pure DOS... And if you get a laptop from the pre-PCI era, then you're likely on a 486 or less, which not only will be worse in all other areas - but many of these laptops also used not to bundle any audio card at all, so they are strictly worse.

That's not to say that there are no Pentium laptops with built-in ISA audio - there are, and probably I was again unlucky, with the 240x being a rare combination of a DOS-compatible-ish PCI soundcard on a "bad" motherboard (apparently it uses the Intel 440MX chipset, which does not support DDMA). But again... expect some issues; there are no perfect laptops, and even back in the day, there was hardly a configuration that would run everything flawlessly...

Conclusions. 

Was it worth it? Should you do it? Yes and no...

It's small!

For retro gaming, or in general, passive consumption (demos, etc), it's overall a terrible idea, I'm pretty confident all laptops would be terrible, and even most desktops.

The early PC landscape was just a mess of incompatible devices, buggy, unpatched software, and crashes. You were lucky when things worked, and this is true today as well. DosBox is a million times more compatible than any real hardware. Yes, it has bugs, and lots of things can be more accurate, but on average it is better than real hardware.

There are many DosBox builds out there, and I'm sure this is going to be quickly outdated, but at the time of writing I recommend:

  • On Windows, primarily DosBox-X
    • I also keep vanilla for debugger-enabled builds - you can even get a dosbox plugin for ida pro, but that's for another time, and DosBox-ECE
  • On Mac, Boxer - Madds branch and vanilla DosBox on Mac
    • Last time I tried, DosBox-X had issues on Mac with the mouse emulation - might have been fixed by now.

On Windows, and especially if you care about Windows of any kind, there is 86Box (a fork of PCem), which is a lower-level, more accurate emulator. DosBox does not work great even with Win3.11, due to some odd mouse emulation problems that seem to be different in each fork.

If, like me, you want to experience a monotasking machine that you can grab for a few hours at a time to play with a simpler, more focused experience, then I'd say these laptops are great fun!

I'm even collecting a bit of a digital retro-library by mirroring old websites, often grabbed from the Wayback machine, and grabbing old magazines from the Internet Archive, to recreate the kind of reading materials I had back then...

Overall, setting this up took me less time and energy than tinkering with a Raspberry Pi or say, trying to install a fully functional Linux on a random contemporary laptop. It's one of the least annoying projects I have embarked upon.

My conscience feels ok too. It won't become garbage, I hate clutter, I hate having too much stuff, too many things I don't need in my life, especially digital crap that creates more problems than it really solves... With this one, I know I can sell or donate the hardware the moment I don't want to use it anymore, it's not going to be in a landfill, it's not another stupid gadget with a short lifespan.

The best part: all the software is portable. DOS doesn't really care about the hardware, you only need to replace a few lines in your config.sys if you have specific drivers... so I can migrate all I have on this laptop to a DosBox setup (even today I keep the two in sync) or to a different machine.
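
This works because, in the end, a DosBox setup is just a folder mounted as a drive - something along these lines in the config (the path is purely an example):

```
# dosbox.conf fragment: mount the same directory tree I keep on the laptop's SSD
[autoexec]
mount c ~/retro/dos_c
c:
```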

Not bad. You want to try? Luckily it's easy, this is what I learned! You don't have to stress over the hardware (as I did), because none is perfect.

I went for something relatively "modern", a laptop that in its prime would have run Windows 98/NT/2000 - and "downgraded" it to do mostly DOS. I think that's a good choice, but I don't think this ended up working much better or worse than any other option I was considering.

02 February, 2022

WTF is the Metaverse?!

Disclaimer! Yes, I work at Roblox. It's been a decade or so since I could pretend this space was anonymous, and many years ago I made it clear that c0de517e/deadc0de = Angelo Pesce. And yes, my work makes me think about what this "metaverse" thing is more than the average person on the street (Roblox has been a metaverse company long, long before it was "cool"). I guess like an engineer at Google might think about "the internet" more than the average person... But the following truly is not about what we are building at Roblox, which is something quite specific - these are my opinions, and other people might agree with them to some degree, or disagree.

I don't like hype cycles.

It is somewhat frustrating to see how supposedly experienced and rational people jump on the latest shiny bandwagon. At the same time, I guess it's comfortingly human. But that's a topic for another time...

Thing is, the metaverse is undoubtedly "hot" right now, so hot that every company, regardless of what they do, wants to have a claim to it. Mostly harmless, even cute, and for some, validating years of effort pushing these ideas... But, at the same time, it dilutes the concept, it makes words mean little to nothing when you can slap them onto any product.

So, let's give it a try and really think about what the metaverse is, and how, if at all, it is different from what we have today.

In the most general sense, "the metaverse" evokes ideas of synthetic, alternative places for social interactions, entertainment, perhaps even work... living our lives.

And let's set aside the possible dystopian scenarios - not the point of this, albeit, these are always important to seriously consider, while also reminding ourselves that they are levied against most society-affecting technology, from the printing press onwards.

This definition is just plain... boring!

It's boring because we have always been doing that, at least since we had the ability to connect computers together. We are social animals, obviously, we want to imagine any new technology in a social space. BBSes were alternative places for social interaction. And entertainment. And work. And from there on we had all kinds of shared virtual worlds, from IRC to the Mii Channel, from MUDs to World of Warcraft, from Club Penguin to Second Life, and so on.

LucasFilm's Habitat. Now live!

The entire internet fits the bill, through that lens, and we don't need a new word for old ideas - outside marketing perhaps.

So, let's try to find some true meaning for this word. What's new now? Is it VR/AR/XR perhaps? Web 3.0 and NFTs? The "fediverse"?

Or perhaps there is nothing new really, but we have simply run out of ideas, having already explored the space of conventional social media startups, and are now trying to see if some old concept can be successful - throwing a few things at the wall to see what sticks...

My thesis? Agency.

Agency is the real differentiating factor. 

Really, it's right there, staring at us. Like a high school kid facing an essay, sometimes it's good to look at the word itself, what does the dictionary tell us? Yes, we're going there: "In its most basic use, meta- describes a subject in a way that transcends its original limits, considering the subject itself as an object of reflection".

If you're controlling your virtual, alternative, synthetic universe, you are creating something that might be spectacular, engaging, entertaining, powerful... but it's not a metaverse. 

Videogames are not the metaverse, not even MMORPGs... Sandboxes/UGC/modding is not the metaverse. Virtual worlds are not the metaverse! 

Yes, I'm "disqualifying" Minecraft, Second Life, Gather.Town, GTA 5, Decentraland, Skyrim, Fortnite, Eve Online, the lot - not because of the quality of these products, but because we don't need new words for existing concepts, we really don't... 

Obviously, the line is somewhat blurry, but if you're making most of the rules you are "just" creating a world, with varying degrees of freedom.

A metaverse is an alternative living space (universe... world...) that is mostly owned by the participants, not centrally directed. Users create, share creations and make all of the rules (the meta- part).

Why does this distinction matter? Why is it interesting? 

At a shallow level, obviously, it gives you more variety than a single virtual world. It has all the interesting implications of any platform where you do not control content. You are not really asking people to enter your world or use your product; you are there to provide a service for others to create what they want to create and market it, form communities, and engage with them...

But I think it's more than that. This extra agency works to create a qualitatively different community, one that is centered around the creation and sharing of creations, an economy you might call it. Something quite different from passive consumption or social co-experience.

Ironically, through this lens, most of Web 3.0 "gets it wrong", focusing on decentralizing a transaction ledger of virtual ownership, but making that ownership be simply parts of strictly controlled virtual universes. You own a certificate to a plot of digital land that someone else created and controls.

Regardless of the fact that you only own the certificate, and not the actual land, which can disappear at any moment... these kinds of worlds seem at best a coat of paint over very old and limited concepts.

To me, even outside the blockchain, the entire notion of centralized versus decentralized systems, proprietary, closed versus interoperable open standards, all these concepts are really a "how", not a "what", they might be appropriate choices for a given product at a given time, but they should never be what the product "is".

Without wanting to sell the metaverse as the future, I personally think that these "fake" or "weak" metaverses, together with the current hype, are what pushes people away from something that could be truly interesting.

Note also that nothing of this idea of social creativity, giving a platform for people to create and share in others' creations, has to do with new technologies. 

You don't need VR for any of this. You don't need hand tracking, machine learning and 3d scanning, you don't even need 3d rendering at all! 

These are all tools that might or might not be appropriate, but you could have perfectly great metaverses that are text only if you wanted to (remember MUDs? add the "meta" part...). And at the same time, just because you have some cool 3d technology, it does not mean you have something for the metaverse...

E.g. you could have a server hosting community-created ROMs for a Commodore 64, add built-in networking to allow the ROMs to be about co-experience, add a pinch of persistence to allow people to express themselves, and you'd have a perfectly great, exciting metaverse... Or you could take something like UXN and the vision of permacomputing as the foundation, to reference something more contemporary...

BBS Door Games - more proto-metaverse-y than most of today's virtual worlds.

In summary, these are to me the key attributes of this metaverse idea:

  1. Inherently Social and interactive - as we are social animals and we want to inhabit spaces that allow socialization. This mostly means real-time networking, allowing users to connect, create and experience together.
  2. User-Created: participants have full agency over the worlds. Otherwise, you're just making a conventional virtual world. This is the "meta" part, you should not have control over the worlds, users should be able to take pieces of the universe and shape it, or completely subvert everything, own their creations. 
    • Litmus test: if your users are "playing X", then X is not a metaverse. If they are playing X in Y, then Y might be a metaverse :)
  3. Must have Shareable Persistence. Users should be able, in-universe, to store and share what they create - creating an economy, connecting worlds and people. And at the very least, the world must allow for a persistent, shared representation of self (Avatars). Otherwise, you're only making a piece of middleware, a game engine.

It's a social spin over the old, OG hacker's ethos of tinkering, creating with computers, owning their creations and sharing them. It has nothing to do with the particular implementation and it is not even about laws, copyright, or politics. It's a community that creates together, makes its own rules, and has full agency over these virtual creations. 

One more thing? In a truly creator-centric economy, you don't need to base all your revenue on ads, and the dark patterns they create.

Perhaps to shape that future it's more useful to revisit old, lost ideas than to think about shiny new overhyped toys. More of Smalltalk's idea of Personal Computing and Plan 9, less NFTs and XR...

27 December, 2020

Why Raytracing won't simplify AAA real-time rendering.

"The big trick we are getting now is the final unification of lighting and shadowing across all surfaces in a game - games had to do these hacks and tricks for years now where we do different things for characters and different things for environments and different things for lights that move versus static lights, and now we are able to do all of that the same way for everything..."

Who said this?

Jensen Huang, presenting NVidia's RTX? 

Not quite... John Carmack. In 2001, at Tokyo's MacWorld, showing Doom 3 for the first time. It was, though, on NVidia hardware just a bit less powerful than today's 20xx/30xx series: a GeForce 3.

You can watch the recording on YouTube for a bit of nostalgia.

And of course, the unifying technology at that time was stencil shadows - yes, we were at a time before shadowmaps were viable.

Now. I am not a fan of making long-term predictions, in fact, I believe there is a given time horizon after which things are mostly dominated by chaos, and it's just silly to talk about what's going to happen then.

But if we wanted to make predictions, a good starting point is to look at the history, as history tends to repeat. What happened last time that we had significant innovation in rendering hardware? 

Did compute shaders lead to simpler rendering engines, or more complex? What happened when we introduced programmable fragment shaders? Simpler, or more complex? What about hardware vertex shaders - a.k.a. hardware transform and lighting...

And so on and so forth, we can go all the way back to the first popular accelerated video card for the consumer market, the 3dfx.

Memories... A 3dfx Voodoo. PCem has some emulation for these, if one wants to play...

Surely it must have made things simpler, not having to program software rasterizers specifically for each game, for each kind of object, for each CPU even! No more assembly. No more self-modifying code, s-buffers, software clipping, BSPs... 

No more crazy tricks to get textures on screen, we suddenly got it all done for us, for free! Z-buffer, anisotropic filtering, perspective correction... Crazy stuff we never could even dream of is now in hardware. 
Imagine that - overnight you could have taken the bulk of your 3d engine and deleted it. Did it make engines simpler, or more complex? 
Our shaders today, powered by incredible hardware, are much more code, and much more complex, than the software rasterizers of decades ago!

Are there reasons to believe this time it will be any different?

Spoiler alert: no. 

At least not in AAA real-time rendering. Complexity has nothing to do with technologies.
Technologies can enable new products, true, but even the existence of new products is always about people first and foremost.

The truth is that our real-time rendering engines could have been dirt-simple ten years ago, there's nothing inherently complex in what we got right now.

Getting from zero to a reasonable, real-time PBR renderer is not hard. The equations are there: just render one light at a time, brute-force the shadowmaps, loop over all objects and shadows, and you can get there. Use MSAA for antialiasing...
Of course, you would need to trade off performance for such relatively "brute-force" approaches, and some quality... But it's doable, and it will look reasonably good.
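
To make that concrete: the whole architecture of such a renderer fits in a handful of lines. This is only a shape-of-the-code sketch - every type and call below is a hypothetical placeholder, not any real engine's API:

```cpp
#include <vector>

// Hypothetical stand-ins for a real engine's types - not any actual API.
struct Light {};
struct Object {};
struct Camera {};
struct Scene { std::vector<Light> lights; std::vector<Object> objects; };

void RenderShadowMap(const Light&, const std::vector<Object>&); // brute-force depth pass
void BeginMSAAPass(const Camera&);                              // bind an MSAA render target
void DrawPBR(const Object&, const Light&, const Camera&);       // evaluate the BRDF for one light
void ResolveMSAA();                                             // resolve MSAA to the backbuffer

// The whole renderer: O(lights * objects) draws, every shadowmap redone
// each frame, one light at a time with additive blending. Wasteful, but
// simple, correct, and it will look reasonably good.
void RenderFrame(const Scene& scene, const Camera& camera)
{
    for (const Light& light : scene.lights)
        RenderShadowMap(light, scene.objects);

    BeginMSAAPass(camera);
    for (const Object& obj : scene.objects)
        for (const Light& light : scene.lights)
            DrawPBR(obj, light, camera); // the shadowmap lookup happens in here
    ResolveMSAA();
}
```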

Even better? Just download Unreal, and hire -zero- rendering engineers. Would you not be able to ship any game your mind can imagine?

The only reason we do not... is in people and products. It's organizational, structural, not technical.

We like our graphics to be cutting edge, as graphics and performance still sell games, sell consoles, get talked about.
And it's relatively inexpensive, in the grand scheme of things - rendering engineers are a small fraction of the engineering effort, which in turn is not the most expensive part of making AAA games...

So pretty... Look at that sky. Worth its complexity, right?

In AAA it is perfectly ok to have someone work for, say, a month, producing new, complicated code paths to save, say, one millisecond of frame time. It's often perfectly ok to spend a month to save a tenth of a millisecond!
As long as this equation holds true, we will always sacrifice engineering simplicity, and thus accept bigger and bigger engines and more complex rendering techniques, in order to have larger, more beautiful worlds, rendered faster!

It has nothing to do with hardware, nor does it have anything to do with the inherent complexity of photorealistic graphics.
 
We write code because we're not in the business of making disruptive new games, AAA is not where risks are taken, it's where blockbuster productions are made. 

It's the nature of what we do, we don't run scrappy experimental teams, but machines with dozens of engineers and hundreds of artists. We're not trying to make the next Fortnite - that would require entirely different attitudes and methodologies.

And so, engineers gonna engineer, if you have a dozen rendering people on a game, its rendering will never be trivial - and once that's a thing that people do in the industry, it's hard not to do it, you have to keep competing on every dimension if you want to be at the top of the game.

The cyclic nature of innovation.


Another point of view, useful to make some prediction, comes from the classic works of Clayton Christensen on innovation. These are also mandatory reads if you want to understand the natural flow of innovation, from disruptive inventions to established markets.
 
One of the phenomena that Christensen observes is that technologies evolve in cycles of commoditization, bringing costs down and scaling, and de-commoditization, leveraging integrated, proprietary stacks to deliver innovation.

In AAA games, rendering has not been commoditized, and the trend does not seem to be going towards commoditization yet.
Innovation is still the driving force behind real-time graphics, not scale of production. Even if we have been saying for years, perhaps decades, that we were at the tipping point, in practice we never seemed to reach it.

We are not even, at least in the big titles, close to the point where production efficiency for artists and assets is really the focus.
It's crazy to say, but still today our rendering teams typically dwarf the efforts put into tooling and asset production efficiency.

We live in a world where it's imperative for most AAA titles to produce content at a steady pace. Yet, we don't see this percolating in the technology stack, look at the actual engines (if you have experience of them), look at the talks and presentations at conferences. We are still focusing on features, quality and performance more than anything else.

We do not like to accept tradeoffs on our stacks, we run on tightly integrated technologies because we like the idea of customizing them to the game specifics - i.e. we have not embraced open standards that would allow for components in our production stacks to be shared and exchanged.

Alita - rendered with Weta's proprietary (and RenderMan-compatible) Manuka

I do not think this trend will change, at the top end, for the next decade or so at least - the only time horizon I would even care to make predictions about.
I think we will see a focus on the efficiency of artist tooling - this shift in attention is already underway - but engines themselves will only keep growing in complexity, and the same goes for rendering overall.

We have seen, just recently, in the movie industry (which is another decent way of "predicting" the future of real-time) that production pipelines are becoming somewhat standardized around common interchange formats.
For the top studios, rendering itself is not, with most big ones running their own proprietary path-tracing solutions...

So, is it all pain? And will it always be?

No, not at all! 

We live in a fantastic world full of opportunities for everyone. There is definitely a lot of real-time rendering that has been completely commoditized and abstracted.
People can create incredible graphics without knowing anything at all about how things work underneath, and this is definitely something incredibly new and exciting.

Once upon a time, you had to be John friggin' Carmack (and we went full circle...) to make a 3d engine, create Doom, and be legendary because of it. Your hardcore ability to push pixels created entire game genres that were impossible to make without the very best of technical skills.


Today? I believe an FPS template ships for free with Unity, you can download Unreal with its source code for free, you have Godot... All products that invest in art efficiency and ease of use first and foremost.

Everyone can create any game genre with little complexity, without caring about technology - the complicated stuff is only there for cutting-edge "blockbuster" titles where bespoke engines matter, and only to deliver somewhat better features (e.g. fidelity, performance, etc.), not to fundamentally enable the game to exist...

And that's already professional stuff - we can do much better!

Three.js is the most popular 3d engine on GitHub - you don't need to know anything about 3d graphics to start creating. We have Roblox, Dreams, Minecraft and Fortnite Creative. We have Notch, for real-time motion graphics...
Computer graphics has never been simpler, and at the same time, at the top end, it has never been more complex.

Roblox creations are completely tech-agnostic.

Conclusions

AAA will stay AAA - and for the foreseeable future it will keep being wonderfully complicated.
Slowly we will invest more in productivity for artists and asset production - as it really matters for games - but it's not a fast process.

It's probably easier for AAA to become relatively irrelevant (compared to the overall market size - that expands faster in other directions than in the established AAA one) - than for it to radically embrace change.

Other products and other markets are where real-time rendering is commoditized and radically different. It -is- already: all these products already exist, and we already have huge market segments that do not need to bother at all with technical details. And the quality and scope of these games grows year after year.

This market was facilitated by the fact that we have 3d hardware acceleration pretty much in any device now - but at the same time new hardware is not going to change any of that.

Raytracing will only -add- complexity at the top end. It might make certain problems simpler, perhaps (note: right now people seem to underestimate how hard it is to make good RT shadows or, even worse, RT reflections, which are truly hard...), but it will also make the overall effort to produce a AAA frame bigger, not smaller - like all technologies before it.
We'll see incredible hybrid techniques, and if today we have dozens of ways of doing shadows and combining signals to solve the rendering equation in real-time, we'll only grow these more complex - and wonderful - in the future.

Raytracing will eventually percolate to the non-AAA space too, as all technologies do.

But that won't change complexity or open new products there either because people who are making real-time graphics with higher-level tools already don't have to care about the technology that drives them - technology there will always evolve under the hood, never to be seen by the users...