tag:blogger.com,1999:blog-69508335315629422892024-03-16T11:49:53.236-07:00C0DE517EComputer Science && Computer Graphics && AlteraDEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.comBlogger352125tag:blogger.com,1999:blog-6950833531562942289.post-70143305647748774652024-02-19T16:32:00.004-08:002024-02-19T16:32:31.054-08:00Peaked technologies.<p> <span style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"> </span><b style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the article here: </b><a href="https://c0de517e.com/012_peak_tech.htm">https://c0de517e.com/012_peak_tech.htm</a> </p><p style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"></p><div class="separator" style="clear: both; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px; text-align: center;"><div class="separator" style="clear: both;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhmwx-A11Dm_SC_GsuWib9BIPVzXGZX7tFNlWidydITkSqGw2k23jqfZL5J6w_LojOKCxgF_CBG_IZC2JtHb2Ilwb8dMcTfHnkBbYXrKxd8oehwbVFNJRMT7ZPXOWsyCIK6cpSzDYn6r-Ka9oxIuZ8XaCQbCVC45t0q-252_yHk_FFMBwL_jp3d0tK6ZcRJ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="1970" data-original-width="1972" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEhmwx-A11Dm_SC_GsuWib9BIPVzXGZX7tFNlWidydITkSqGw2k23jqfZL5J6w_LojOKCxgF_CBG_IZC2JtHb2Ilwb8dMcTfHnkBbYXrKxd8oehwbVFNJRMT7ZPXOWsyCIK6cpSzDYn6r-Ka9oxIuZ8XaCQbCVC45t0q-252_yHk_FFMBwL_jp3d0tK6ZcRJ" width="240" /></a></div><br /><br /></div></div><p style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) 
to my new blog at c0de517e.com.</b></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-34566507432733093682024-01-26T21:26:00.002-08:002024-01-26T21:26:10.790-08:00Portals are misunderstood.<p> <b style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the article here: </b><a href="https://c0de517e.com/011_portals.htm">https://c0de517e.com/011_portals.htm</a></p><p style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"></p><div class="separator" style="clear: both; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhX6wwcJQlGOtS0OpzJyilDHX-yBBHTln_-2G8GqUNsqmHv5zvtdfWx_qkzoqRHm3pGovGRHaZ38bdqdyrG0f6WNxdgSfcVDJqmtExvam9POWnP24nSWa7rm3SspCleywwwYhCl2uYL2ab70MpKdF2w39ogRjRvESRczsDzqFOrd2_wQ7ptekqS3rKz3GkT" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="1944" data-original-width="1906" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEhX6wwcJQlGOtS0OpzJyilDHX-yBBHTln_-2G8GqUNsqmHv5zvtdfWx_qkzoqRHm3pGovGRHaZ38bdqdyrG0f6WNxdgSfcVDJqmtExvam9POWnP24nSWa7rm3SspCleywwwYhCl2uYL2ab70MpKdF2w39ogRjRvESRczsDzqFOrd2_wQ7ptekqS3rKz3GkT" width="235" /></a></div><br /></div><p style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) to my new blog at c0de517e.com.</b></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-9525063221036219632024-01-17T20:33:00.004-08:002024-01-17T20:33:50.060-08:00The art and joy of well-architected "bad code".<p> <b style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the article here: </b><a href="https://c0de517e.com/009_website_joy.htm">https://c0de517e.com/009_website_joy.htm</a> </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhhi0MSrApsXFATvcid8vNEOScCWnZXX5re5A5-AUPXPqhAsGtFfpNpCKxDfmTvOHqCcYsk3_0B4nIMW1mrscMsWkiaUUnbor2EOZFI276OvzGdRg0d75IY_MbWOOlxThlorONtAQg3khrkzAd-I7MSGuHA2r0awIX37WIU86pVKrOG1XgI1f8CxVJLGjzZ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="474" data-original-width="650" height="233" src="https://blogger.googleusercontent.com/img/a/AVvXsEhhi0MSrApsXFATvcid8vNEOScCWnZXX5re5A5-AUPXPqhAsGtFfpNpCKxDfmTvOHqCcYsk3_0B4nIMW1mrscMsWkiaUUnbor2EOZFI276OvzGdRg0d75IY_MbWOOlxThlorONtAQg3khrkzAd-I7MSGuHA2r0awIX37WIU86pVKrOG1XgI1f8CxVJLGjzZ" width="320" /></a></div><br /><p></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) 
to my new blog at c0de517e.com.</b></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-24919431952913288702023-12-08T11:27:00.001-08:002023-12-08T11:27:03.698-08:00Exploring the design space of "remote scene approximation"<p><b style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the article here: </b><a href="https://c0de517e.com/007_impostors.htm">Exploring the design space of "remote scene approximation". (c0de517e.com)</a></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) to my new blog at c0de517e.com.</b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b></b></p><div class="separator" style="clear: both; text-align: center;"><b><a href="https://blogger.googleusercontent.com/img/a/AVvXsEi_vWEOp1baZg8xwdJnfZIs3sIRHUvyNsYmeS4V5wv3LSqgvvfGpumqO0TU3aJzL0cDGw2SKQURKf5ldIZ2Bt-DwCK8cJaDxiS_jsrFTRLFew2jXiR8Pvv2NSPNuTTlereFywMxCkOIauILWR5wu0WGSppnubiGcddGpoZFvXkIG-Ao-23sH47nCB7jIJPY" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="890" data-original-width="960" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEi_vWEOp1baZg8xwdJnfZIs3sIRHUvyNsYmeS4V5wv3LSqgvvfGpumqO0TU3aJzL0cDGw2SKQURKf5ldIZ2Bt-DwCK8cJaDxiS_jsrFTRLFew2jXiR8Pvv2NSPNuTTlereFywMxCkOIauILWR5wu0WGSppnubiGcddGpoZFvXkIG-Ao-23sH47nCB7jIJPY" width="259" /></a></b></div><b><br /><br /></b><p></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-84881127922713182482023-10-03T12:46:00.002-07:002023-10-03T12:46:12.586-07:00From the archive: Notes on environment lighting occlusion.<p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"> <b>Read the article here: </b><a href="https://c0de517e.com/006_cubemap_occlusion.htm" style="background-color: transparent;">From the archive: Notes on environment lighting occlusion. (c0de517e.com)</a></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) to my new blog at c0de517e.com.</b></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-17302395477564958022023-09-25T13:46:00.000-07:002023-09-25T13:46:07.348-07:00Writing to stimulate the brain, and to quiet it.<p> <b style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the article here: </b><a href="https://c0de517e.com/005_on_paper.htm">Writing to stimulate the brain, and to quiet it. (c0de517e.com)</a></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) 
to my new blog at c0de517e.com.</b></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-15562519707429068962023-09-14T11:58:00.001-07:002023-09-14T11:58:12.729-07:00WASMtoy<p><b style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the article here: </b><a href="https://c0de517e.com/004_wasmtoy.htm">Crap: WASMtoy. (c0de517e.com)</a></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>This blogspot site is dead! </b></p><p style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;"><b>Update your links (and RSS!) to my new blog at c0de517e.com.</b></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-44893822582306989402023-09-09T14:49:00.002-07:002023-09-09T14:49:10.932-07:0020x1000 Use(r)net Archive.<p><b> Read the following article here: <a href="https://c0de517e.com/003_usenet_archive.htm">20x1000 Use(r)net Archive. (c0de517e.com)</a></b></p><p><b>This blog is dead! Update your links (and RSS!) to c0de517e.com.</b></p><p><b>Below you will find a draft version of the post, all images, formatting and links will be missing here as I moved to my new system.</b></p><p>20x1000 Use(r)net Archive.</p><p>An investigation of the old web.</p><p>This website is a manifestation of an interest I've acquired over the past couple of years in internet communities, creative spaces and human-centric technology. Yes, my landing back on the "small web" is not just a reaction to seeing what happen when someone like Moron acquires a social network like they did with Twitter...</p><p>Part of it is that I consider Roblox itself (my employer at the time of writing, in case you don't know) to be part of the more "humanistic" web, a social experience not driven by ads, algorithms and passive feeds, but by creativity, agency, active participation.</p><p>As part of this exploration, I wanted to go back and see what we had when posting online was not subject to an algorithm, was not driven to maximize engagement to be able to monetize ads and the like... I downloaded a few archives of old usenet postings (i.e. when newsgroups were still used for discussions, and not as they later devolved, exclusively as a way to distribute binary files of dubious legality) and wrote a small script to convert them to HTML.</p><p>The conversion process is far from... good. As far as I could tell, there is no encoding of the comment trees in usenet, it's just a linear stream of email-like messages as received by the server. </p><p>There does not even seem to be a standard for dates or... anything regarding the headers, so whilst I did write a parser that is robust enough to guess a date for each post in the archive, the date itself is not reliable, as I've seen a ton of different encodings, timezone formats and so on. </p><p>Even the post subject is not entirely reliable, because people change it, sometimes by mistake (misspelling, corrections, truncation), sometimes adding chains of "re:" or "was:" and so on, which again, I tried somewhat to account for, but succeeded only partially.</p><p>For each archive I converted only the top 1000 posts by number of replies, and no other filtering was done, so you will see the occasional spam, and a ton of less than politically correct stuff. 
Proceed at your peril, you have been warned.</p><p>And now without further ado, here are a few archives for your perusal.</p><p>01 [FILE:EXTERNAL/news/alt.philosophy/index_main.htm alt.philosophy]</p><p>02 [FILE:EXTERNAL/news/alt.postmodern/index_main.htm alt.postmodern]</p><p>03 [FILE:EXTERNAL/news/comp.ai.alife/index_main.htm comp.ai.alife]</p><p>04 [FILE:EXTERNAL/news/comp.ai.genetic/index_main.htm comp.ai.genetic]</p><p>05 [FILE:EXTERNAL/news/comp.ai.neural-nets/index_main.htm comp.ai.neural-nets]</p><p>06 [FILE:EXTERNAL/news/comp.ai.philosophy/index_main.htm comp.ai.philosophy]</p><p>07 [FILE:EXTERNAL/news/comp.arch/index_main.htm comp.arch]</p><p>08 [FILE:EXTERNAL/news/comp.compilers/index_main.htm comp.compilers]</p><p>09 [FILE:EXTERNAL/news/comp.games.development.industry/index_main.htm comp.development.industry]</p><p>10 [FILE:EXTERNAL/news/comp.games.development.programming.algorithms/index_main.htm comp.development.programming.algorithms]</p><p>11 [FILE:EXTERNAL/news/comp.graphics.algorithms/index_main.htm comp.graphics.algorithms]</p><p>12 [FILE:EXTERNAL/news/comp.jobs.computer/index_main.htm comp.jobs.computer]</p><p>13 [FILE:EXTERNAL/news/comp.lang.forth/index_main.htm comp.lang.forth]</p><p>14 [FILE:EXTERNAL/news/comp.lang.functional/index_main.htm comp.lang.functional]</p><p>15 [FILE:EXTERNAL/news/comp.lang.lisp/index_main.htm comp.lang.lisp]</p><p>16 [FILE:EXTERNAL/news/comp.org.eff.talk/index_main.htm comp.org.eff.talk]</p><p>17 [FILE:EXTERNAL/news/comp.society.futures/index_main.htm comp.society.futures]</p><p>18 [FILE:EXTERNAL/news/comp.software-eng/index_main.htm comp.software-eng]</p><p>19 [FILE:EXTERNAL/news/comp.sys.apple2/index_main.htm comp.sys.apple2]</p><p>20 [FILE:EXTERNAL/news/comp.sys.ibm.pc.demos/index_main.htm comp.sys.ibm.pc.demos]</p><p>Better times? Worse times?</p><div><br /></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-36937018863559825662023-09-07T17:27:00.009-07:002023-09-07T17:27:52.953-07:00Notes: Reversing Revopoint Scanner.<p><b style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Read the following article here: </b><a href="https://c0de517e.com/002_reversing_revopoint.htm">Notes: Reversing Revopoint Scanner. (c0de517e.com)</a> <b style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">This blog is dead! Update your links (and RSS!) to c0de517e.com.</b></p><b style="background-color: white; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.2px;">Below you will find a draft version of the post, all images, formatting and links will be missing here as I moved to my new system.</b><div><span style="background-color: white; font-size: 13.2px;"><span style="font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif;"><div><br /></div><div>I have to admit, I bought my (...checking the settings...) iPhone 13 pro back in the day mostly because of its 3d scanning abilities, I wanted to have fun with acqusition of 3d scenes. It turns out that the lidar camera is not that strong, it's still good fun both for "serious" uses photogrammetry is better (RealityScan or the nerf-based Luma.ai)... 
but I digress...</div><div><br /></div><div>[IMG:sitescape.jpg SiteScape iOS app]</div><div><br /></div><div>[IMG:nerds.jpg No NERFs, only nerds.]</div><div><br /></div><div>Point is, I have been fascinated with 3d scanning for quite a while, so when [LINK:https://forum.revopoint3d.com/ revopoint] came out with a new kickstarter for its "range" scanner, I bit the bullet and got me one.</div><div>Unfortunately, as it often happens with new companies and products, albeit the hardware in the scanner is quite competent, the software side is still lacking. A fact that is often brought up in the support forums, the most annoying issue being its propensity to lose tracking of the object being scanned, and thus failing to align frames.</div><div><br /></div><div>[IMG:revoscan.png I assure you, there is no Toshiba Libretto with a keyboard that large...]</div><div><br /></div><div>This is especially infuriating as in theory one could run a more expensive alignment algorithm on the captured frames offline, but the software only works with realtime alignment, and it is not good enough to actually succeed at that.</div><div><br /></div><div>Well, this is where knowing a bit of (python) programming, a bit about 3d and a dash of numerical optimization can come to rescue.</div><div><br /></div><div>Luckily, revoscan saves a "cache" of raw frames in a trivial to load format. The output of the color camera is stored straight as images, while the depth camera is saved in ".dph" files - all being the same size: 500kb.</div><div><br /></div><div>Now... 640*400 is 256000... so it seems that the depth is saved in a raw 2-byte per pixel format, which indeed is the case. Depth appears to be encoded as a 16 bit integer, with actual range going in the frames I've dumped from circa 3000 to 7000, with zero signaling an invalid pixel.</div><div>This seems close enough to the spec sheet, which describes the scanner as able to go from 300 to 800mm with a 0.1mm precision. So far so good!</div><div><br /></div><div>[IMG:specs.png From the revopoint website.]</div><div><br /></div><div>I don't want to make this too long, but suffice to say that trying to guess the right projection entirely from the specs I saw, didn't work. In fact, it seems to me the measurements they give (picture above) do not really make for a straight furstum.</div><div><br /></div><div>[IMG:stretch.png Trying to do some math on pen an paper, from the specs - clearly wrong.]</div><div><br /></div><div>One idea could be to just scan a simple scene with the included software, either capturing just a single frame (turns out the easiest is to delete all other frames in the "cache" folder, then reopen the scan) or using the included tripod to get a static scan, then convert it to a point cloud with as minimal processing as possible, and try to deduce the projection from there.</div><div><br /></div><div>Well... that's exactly what I've done.</div><div><br /></div><div>[IMG:calibration.jpg Trying to create a scene with a good, smooth depth range and some nice details.]</div><div><br /></div><div>[IMG:revoscan2.jpg How it looks like in RevoScan.]</div><div><br /></div><div>Point clouds are a well known thing, so of course you can find packages to handle them. For this I chose to work with [LINK:http://www.open3d.org/ open3d] in Python/Jupyter (I use the Anaconda distribution), which is nowadays my go-to setup for lots of quick experiments. 
</div><div>Open3d provides a lot of functionality, but what I was interested on for this is that it has a simple interface to load and visutalize point clouds, to find alignment between two clouds and estimate the distance between clouds.</div><div><br /></div><div>Not, here is where a lot of elbow grease was wasted. It's trivial enough to write code to do numerical optimization for this problem, especially as open3d provides a fast enough distance metric that can be directly plugged in as an error term. The problem is to decide what parameters to optimize and how the model should look like. Do we assume everything is linear? Is there going to be any sort of lens distortion to compensate for? Do we allow for a translation term? A rotation term? How to best formulate all of these parameters in order to help the numerical optimization routine?</div><div><br /></div><div>I tried a bunch of different options, I went through using quaternions, I tried optimizing first with some rigid transform compentation by having open3d align the point clouds before computing the error, to isolate just the projection parameters, and then fixing the projection and optimizing for translation and rotation (as unfortunately I did not find a way to constrain open3d alignment to an orthogonal transform) and so on.</div><div><br /></div><div>At the beginning I was using differential evolution for a global search, followed by Nelder-Mead to refine the best candidate found, but I quickly moved to just doing NM for as a local optimizer and just "eyeballing" good starting parameters for a given model. I did restart NM by hand, by feeding it the best solution it found if the error seemed still large - this is a common trick as there is a phenomenon called "simplex collapse" that scipy does not seem to account for.</div><div><br /></div><div>In the end, I just gave up trying to be "smart" and optimized a 3x4 matrix... yielding this:</div><div><br /></div><div>[IMG:opt.png Eureka! Cyan is the RevoScan .ply exported cloud, Yellow is my own decoding of .dph files]</div><div><br /></div><div>In python:</div><div>[[[</div><div>opt_M = [0.,-1.,0.,0., -1.,0.,0.,0. ,0.,0.,-1.,0.] # Initial guess</div><div>opt_M = [ 0.00007,-5.20327,0.09691,0.0727 , -3.25187,-0.00033,0.97579,-0.02795, 0.00015,0.00075,-5.00007,0.01569]</div><div>#opt_M = [ 0.,-5.2,0.1,0. ,-3.25,0.,0.976,0., 0.,0.,-5.,0.]</div><div><br /></div><div>def img_to_world_M(ix,iy,d,P=opt_M): # Note: ix,iy are pixel coordinates (ix:0...400, iy:0...640), d = raw uint16 depth at that pixel location</div><div> d/=50. # could have avoided this but I didn't want to look at large numbers in the matrix</div><div> return np.matmul(np.array(P).reshape(3,4), np.array([(ix/400.0-0.5)*d,(iy/640.0-0.5)*d,d,1]))</div><div><br /></div><div>with open(dph_file_path, 'rb') as f:</div><div> depth_image = np.fromfile(f, dtype=np.uint16)</div><div> print(min(depth_image), max(depth_image), min(depth_image[depth_image != 0]))</div><div> depth_image = depth_image.reshape(400,640)</div><div><br /></div><div>subset = [(iy,ix) for iy,ix in np.ndindex(depth_image.shape) if depth_image[iy,ix]!=0]</div><div>points = [img_to_world_M(ix,iy,depth_image[iy,ix]) for iy, ix in subset]</div><div>]]]</div><div>Surprisingly... the correct matrix is not orthogonal! To be honest, I would not have imagined that, and this in the end is why all my other fancy attempts failed. 
I tried with a couple of different scenes, and the results were always the same, so this seems to be the correct function to use.</div><div><br /></div><div>Now, armed with this, I can write my own offline alignment system, or hack the scanner to produce for example and animated point cloud! Fun!</div><div><br /></div><div>[offline_align.png Several frames aligned offline.]</div><div><br /></div><div>**Appendix**</div><div><br /></div><div>- In RevoScan 5, the settings that seemed the best are: "accurate" scanning mode, set the range to the maximum 300 to 1200, fuse the point cloud with the "standard" algorithm set at the minimum distance of 0.1. This still does not produce, even for a single frame, the same exact points as decoding the .dph with my method, as RevoScan seems always to drop/average some points.</div><div><br /></div><div>- The minimum and maximum scanning distance seem to be mostly limited by the IR illumiation, more than parallax? Too far, the IR won't reach, too near, it seems to saturate the depth cameras. This would explain also why the scanner does better with objects with a simple, diffuse, white albedo, and why it won't work as well in the sun.</div><div><br /></div><div>[IMG:sls.jpg This is probably about ten years old now, around the time Alex Evans (see https://openprocessing.org/sketch/1995/) was toying with structured light scanning, I was doing the same. Sadly, the hard drives with these scans broke and I lost all this :/]</div></span></span></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-26399590433469166632023-09-03T23:31:00.001-07:002023-09-03T23:31:10.372-07:00How does this work? txt2web.py<br /><b>Read the following article here: <a href="https://c0de517e.com/001_txt2web.htm">https://c0de517e.com/001_txt2web.htm</a> This blog is dead! Update your links (and RSS!) to c0de517e.com. <br /><br />Below you will find a draft version of the post, all images, formatting and links will be missing here as I moved to my new system.<br /></b><div><div><br /></div><div>(tl;dr: badly)</div><div><br /></div><div>The common wisdom when starting a personal website nowadays is to go for a static generator. [LINK:https://gohugo.io/ Hugo] seems particularly popular and touted as a simple, fast, no-brainer solution.</div><div><br /></div><div>OMG! If that's what simplicity looks like these days, we are really off the deep end. Now, I don't want to badmouth what is likely an amazing feat of engineering, I don't know enough about anything to say that... But, tradeoffs, right? Let's not just adopt some tech stack because it's "so hot right now". Right? [LINK:http://c0de517e.blogspot.com/2016/10/over-engineering-root-of-all-evil.html Overengineering is the root of all evil].</div><div><br /></div><div>[IMG:hot.png]</div><div><br /></div><div>I had to interact for the first time with hugo for REAC2023, as I was trying to style a bit more our homepage with the graphic design I made this year, and that was enough to persuade me it's not made for my use-cases. I can imagine that if you are running a bigger shop, a "serious" website, handled by professionals, perhaps it makes sense? But for personal use I felt, quite literally, I could be more efficient using raw HTML. And I don't know HTML, at all!</div><div><br /></div><div>Indeed in most cases for a blog like this, [LINK:https://fabiensanglard.net/html/index.html raw HTML is all you need] (exhibit [LINK:https://motherfuckingwebsite.com/ B]). 
But I'm a programmer, first and foremost, and thus trained to waste time in futile efforts if they promise vague efficiency improvements "down the line" (perhaps, in the next life).</div><div><br /></div><div>Bikeshedding, what can go wrong? In all seriousness though, this is a hobby, and so, everything goes. Plus, I love Python, but I don't know much about it (that's probably why I still love it), so more exercise can only help.</div><div><br /></div><div>From the get go, I had a few requirements. Or anti-requirements, really:</div><div><br /></div><div>1) I don't want to build a site generator, i.e. my own version of hugo et al. I'll write some code that generates the website, but the code and the website are one and the same, everything hardcoded/ad-hoc for it.</div><div>2) I don't want to write "much" code. Ideally I aim at fewer lines in total than the average Hugo configuration/template script.</div><div>3) I don't want to use markdown. Markdown is great, everyone loves it, but it's already overengineering for me. I just need plain text, plus the ability to put links and images.</div><div>4) I don't want to spin a webserver just to preview a dumb static website! Why that's a requirement is puzzling to me.</div><div>5) I want to be able to easily work on my articles anywhere, without having to install anything.</div><div>6) No javascript required. Might add some JS in the future for fun stuff, but the website will always work without.</div><div><br /></div><div>This is actually how I used to write my blog anyways. Most of my posts are textfiles, I don't write in the horrible blogspot editor my drafts, that would be insane. The textfiles are littered with informal "tags" (e.g. "TODO" or "add IMAGE here" etc) that I can search and replace when publishing. So why not just formalize that!</div><div><br /></div><div>That's about it. "txt2web" is a python script that scans a folder for .txt files, and convert them mechanically to HTML, mostly dealing with adding "br" tags and "nbsp". It prepends a small CSS inline file to them for "styling", and it understands how to make links, add images... and nothing else! Oh, yeah, I can **bold** text too, this is another thing I actually use in my writing.</div><div><br /></div><div>Then it generates an index file, which is mostly the same flow converting an "index.txt" to web, but appending at the end a list of links to all other pages it found. And because I felt extra-fancy, I also record modification dates, so I can put them next to posts.</div><div><br /></div><div>Yet, in its simplicity it has a few features that are important to me, and I could not find in "off the shelf" website builders. As of "v0.1":</div><div><br /></div><div>- It checks links for validity, so I can know if a link expired. Maybe one day I could automatically link via Internet Archive, but I don't know if that's even wise (might confuse google or something?).</div><div>- It parses image size so the page does not need to reflow on load. Maybe one day I'll generate thumbnails as well. In general, the pages it generates are the fastest thing you'll ever see on the web.</div><div>- It reminds me of leftover "TODO"s in the page.</div><div>- The 10-liner CSS I added should correctly support day/night modes, and it should be mobile-friendly.</div><div>- It generates a good old RSS feed! 
I personally use Feedly/Reeder (iOS app) daily, after google killed its reader product.</div><div><br /></div><div>If you want to check out the code (beware, it's horrible, I always forget how to write good "pythonic" code as I use it rarely), you'll find it [FILE:txt2web.py here.]</div><div><br /></div><div>Also, for each .htm there should be on the server the source .txt, as I upload everything (the source and the "production" website are one and the same). For example [FILE:001_txt2web.txt]!</div><div><br /></div><div>Enjoy!</div><div><br /></div><div>**Appendix:**</div><div><br /></div><div>What about gopher/the tildeverse/smol-net/permacomputing?</div><div>I like the idea. A lot. I believe there is more value to the individuals in being in smaller communities than in "megascale" ones. I believe that there is more value in content that is harder to digest than in the current "junkfood for the brain" homogenized crap we are currently serving.</div><div><br /></div><div>I suspect Twitter and TikTok "won" because they are exploiting evolutionary biases - which make sense and we have to accept, but that do not necessarily serve us the best anymore. And I suspect that the most value of world-scale anything is extracted by celebrities and advertisers, to have a platform with a wide reach, not by most of the people on the platform.</div><div><br /></div><div>But, needless to say, this is bigger topic for another time! BTW, if you don't know what I'm talking about, let me save you some google: [LINK:https://tildeverse.org/], [LINK:https://communitywiki.org/static/SmolNet.html], [LINK:https://100r.co/site/uxn.html]</div><div><br /></div><div>What's relevant to this post is that yes, the fact I have control over the website and I chose a minimalistic, text-based format, would allow me to output to other representations as well... Maybe one day I'll have a gopher page for work-in-progress stuff, for few people who care to lurk those kind of things.</div><div><br /></div><div>[IMG:libretto.jpg Achievement unlocked?]</div><div><br /></div><div>[IMG:cafe.jpg Hipster coffee, hipster writing.]</div></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-47330045127876360692023-08-31T15:57:00.003-07:002023-08-31T15:58:34.270-07:00 A new Blog: Reinventing the wheel.<p><b>Read the following article here: <a href="https://c0de517e.com/000_reinventing_the_wheel.htm">A new Blog: Reinventing the wheel. (c0de517e.com)</a> This blog is dead! Update your links (and RSS!) to c0de517e.com. </b></p><p><b>Below you will find a draft version of the post, all images, formatting and links will be missing here as I moved to my new system.</b></p><p> A new Blog: Reinventing the wheel.</p><p>I made my first website in high school, must not have been long after I discovered the internet and signed a contract with the first provider of my town. Remember Microsoft Frontpage and GeoCities? Photoshop web export? That!</p><p>It was nothing much, the kind of things that later would find home on MySpace: music, friends, some drawings and 3d art I was making at the time, demoscene, a bit of photography, animated gifs of course. I think later on even had some java effects on it. All in all, teenager stuff.</p><p>Realizing that nobody in the world would care about my crappy art page, it was not long lived, in fact I don't even think I saved a copy in my archives. 
But it introduced me to this idea of the web and using it for personal spaces.</p><p>So, soon after I started another web project, this time focusing on mainstream subjects such as a teenager's view of philosophy, politics, fountain pens and lisp... This time, it was going to using cutting-edge, newfangled technology. It was going to be a blog! </p><p>[IMG:lemon.png Celebrity! Somehow a "famous" lisp website noticed me back in the days...]</p><p>[NOTE: do not link... but it's still up -> http://kenpex.blogspot.com]</p><p>And yes, that was on blogspot, where my main blog is/used to be until today!</p><p>It was truly exciting, even if in retrospect, dumb. See, the idea of keeping an online journal and sharing it is great. What's not to like. Writing - great. Journaling - great. Sharing - great. Even if you don't get any visitors, just the feeling of being part of a community, and a cutting-edge one at that, exploring the cyberspace, joining webrings... Why not?</p><p>Dumb... because, well, I already knew how to write websites, and blogspot offered... nothing. The value is zero, and it has always been zero. It had and has a crappy editor - and we already had frontpage and geocities, you didn't need to know HTML. It was not a social network. Even basic stuff like visitor count and so on had to be brought from external providers. </p><p>True, it does allow for comments, and back in the days these were a bit better, but they were never great - today they are only spam. We felt good using it, even if it was truly never good. And... we pay a price, a quite high price at that.</p><p>We got nothing, nothing of value anyways. And in return we locked ourselves in a platform - one that happens to be dying nowadays, but in general, we gave our creativity for free to an entity that gave us nothing in return.</p><p>Big whoop you say! This is the deal of the modern internet, didn't you hear? "If you are not paying for it, you're not the customer - you are the product being sold". Yes, yes, I'm not that naive. There is a nuance - at least for me. The trade is not per-se bad. But it is a trade, and you have to understand how much value you are getting.</p><p>This is true for everything, really, in tech, perhaps in life. Tradeoffs. I made a few bad deals, and it's time to rectify them. Blogspot has no value. I even used to host my presentations and files on Scribd for it - and boy was that a mistake. </p><p>We should talk about Twitter and similar communities as well... But that will be for another time...</p><p>I abandoned my first blog when I started working professionally in gaming. I didn't want to have my real name associated with it as I was navigating my first jobs, and I didn't want to have to discuss with my employer the nuances of what's good to post or not on a personal, but technical blog. </p><p>Eventually the blog became "famous" enough that people knew it was me behind it, so I dropped the pretense of anonymity - but that came many years after its inception.</p><p>And here we are now. So, this is going to be my new homepage. I hope you enjoy it! It has many features Blogspot never supported, both for you as a viewer and certainly for me as a writer.</p><p>You can understand why I went with my own website instead of simply moving to the next "great for now" platform. I've looked around a bit, and found nothing that provided any value to me. </p><p>Medium is about the same as Blogspot. CoHost - I don't need to tangle my writing with the social media I use to advertise and discuss about it. 
Substack? I don't care about getting paid... Github pages? Why on earth?</p><p>I just want a place to share random crap.</p><p>The old blog will stay up and for a while I plan to cross-post on both. Currently, I have no plans to take the old blog down, but I have scraped its contents in a few different ways "just in case".</p><p>- Angelo Pesce, a.k.a. deadc0de on c0de517e, a.k.a. "kenpex"</p><p>**Appendix:**</p><p>[IMG:1stweb.png Quirky, unprofessional web "design", wasn't life more fun when we were not using all the same cookie molds?]</p><p>[IMG:1stweb_2.png Yeah, the entrace featured my first car, cruising on the Salerno coast hightway, with bad Photoshop effects!]</p><p>[IMG:engblog.png Teenage problems on display. And lisp.]</p><p>[IMG:itblog.png Even more personal, even more random, and of course, more bad Photoshop!]</p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-78471862916936660182023-04-12T17:33:00.006-07:002023-04-18T15:19:58.905-07:00Half baked and a half: A small update.<span style="font-family: arial;">Previously: <a href="http://c0de517e.blogspot.com/2023/03/half-baked-dynamic-occlusion-culling.html">C0DE517E: Half baked: Dynamic Occlusion Culling</a><br /><br />Trying the idea of using the (incrementally accumulated) voxel data to augment the reprojection of the previous depth buffer.</span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Actually, I use here a depth from five frames ago (storing them in a ring buffer) - to simulate the (really worst-case) delay we would expect from CPU readbacks.</span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Scene and the final occlusion buffer (quarter res):</span></div><div><span style="font-family: arial;"><br /></span><div style="text-align: center;"><span style="font-family: arial;"><img height="232" src="https://lh5.googleusercontent.com/BYNnJ6HcLYpYSEXNNzR7YAzll2apxicq1VcCOCgZsPLVEhVgnkS50CAXjipM587LRigpF2KaEjXoLogQyFjODgHQxskdNyUND8dhCDNnIxtxup3WmhA16_RWhhOtHcZEYu0lYGjLBwMcrOXdzttrzt4=w400-h232" width="400" /></span></div><span style="font-family: arial;"><div><br /></div></span><span style="font-family: arial;">Here is the occlusion buffer, generated with different techniques. Top: without median, Bottom: with. Left to right: depth reprojection only, voxel only, both. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Note that the camera was undergoing fast rotation, you can see that the reprojected depth has a large area along the bottom and left edges where there is no information.</span></div><div><span style="font-family: arial;"><br /></span><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEg9Jw-Azf9kwH99BXzJxknfu7OTFqLRMfa-X9vCqub6EWhwLenzooY-AR3uuhg1_1A-WOTBCnnXjvvapVuzkCVX8LKarK2JvCphe-VZnXbWOfz1WQ8g4haZAIc0z-VArUJ8kCjCLBS-9YDhyE5vtRCMlhjWeAEe9D30bFf4UwD83eJJ8ofBTsWx33qROg"><span style="font-family: arial;"><img height="154" src="https://blogger.googleusercontent.com/img/a/AVvXsEg9Jw-Azf9kwH99BXzJxknfu7OTFqLRMfa-X9vCqub6EWhwLenzooY-AR3uuhg1_1A-WOTBCnnXjvvapVuzkCVX8LKarK2JvCphe-VZnXbWOfz1WQ8g4haZAIc0z-VArUJ8kCjCLBS-9YDhyE5vtRCMlhjWeAEe9D30bFf4UwD83eJJ8ofBTsWx33qROg=w400-h154" width="400" /></span></a></div><span style="font-family: arial;"><br />Debug views: accumulated voxel data. 
256x256x128 (8mb) 8bit voxels, each voxel stores a 2x2x2 binary sub-voxel. </span></div><div><span style="font-family: arial;"><br /></span></div><div><div style="text-align: center;"><span style="font-family: arial;"><img height="173" src="https://lh4.googleusercontent.com/-hoaDTpByJovNpY7f61dOfuClALcBsIpbYUCqVs8hDp5MI2ctIRFBToio4oC4TOyB-b_r2X4ckrlMk_ewqSvbX7RcKHVibs85ttAXEyY2QtgEKCX_UV5dWgkS2uzulW81AWcfvG2l8umiVoAmWH5utc=w400-h173" width="400" /></span></div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The sub-voxels are rendered only "up close", they are a simple LOD scheme. In practice, we can LOD more, render (splat) only up close and only in areas where the depth reprojection has holes.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Note that my voxel renderer (point splatter) right now is just a brute-force compute shader that iterates over the entire 3d texture (doesn't even try to frustum cull). </span></div><div><span style="font-family: arial;">Of course that's bad, but it's not useful for me to improve performance, only to test LOD ideas, memory requirements and so on, as the real implementation would need to be on the CPU anyways.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Let's go step by step now, to further illustrate the idea thus far.</span></div><div><span style="text-align: center;"><span style="font-family: arial;"><br /></span></span></div><div><span style="text-align: center;"><span style="font-family: arial;">Naive Z reprojection (bottom left) and the ring buffer of five quarter-res depth buffers:</span></span></div><div><span style="text-align: center;"><span style="font-family: arial;"><br /></span></span></div><div><span style="text-align: center;"><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjE7A350MMHsNH_mUIiW8gQ40IQAmHprmIoOgD5r3pLNFkZU8dSBcXsn_CSgKStsIqlFEMz8ltwh1rEBhIios0UqcmugGkscdx1gAufb7F_Ces8wjetVNhAixZp3F6kdDuzd4ITWTX5mN-iLUC4qHojONvB_jISBOi702ZKfqcdu6XyE9XWYNsTtT31PQ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="540" data-original-width="960" height="180" src="https://blogger.googleusercontent.com/img/a/AVvXsEjE7A350MMHsNH_mUIiW8gQ40IQAmHprmIoOgD5r3pLNFkZU8dSBcXsn_CSgKStsIqlFEMz8ltwh1rEBhIios0UqcmugGkscdx1gAufb7F_Ces8wjetVNhAixZp3F6kdDuzd4ITWTX5mN-iLUC4qHojONvB_jISBOi702ZKfqcdu6XyE9XWYNsTtT31PQ" width="320" /></a></div><br /></span></span></div><div><div><span style="font-family: arial;">Note the three main issues with the depth reprojection:</span></div><div><ol style="text-align: left;"><li><span style="font-family: arial;">It cannot cover the entire frame, there is a gap (in this case on the bottom left) where we had no data due to camera movement/rotation.</span></li><li><span style="font-family: arial;">The point reprojection undersampled in the areas of the frame that get "stretched" - creating small holes (look around the right edge of the image). 
This is the primary job of the median filter to fix, albeit I suspect that this step can be fast enough that we could also supersample a bit (say, reproject a half-res depth into the quarter res buffer...)</span></li><li><span style="font-family: arial;">Disocclusion "holes" (see around the poles on the left half of the frame)</span></li></ol><div><span style="font-family: arial;">After the median filter (2x magnification). On the left, a debug image showing the absolute error compared to the real (end of frame) z-buffer. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The error scale goes from yellow (negative error - false occlusion) to black (no error) to cyan (positive error - false disocclusion. Also, there is a faint yellow dot pattern marking the areas that were not written at all by the reprojection.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Note how all the error right now it "positive" - which is good:</span></div></div><div><br /></div><div><i><span style="font-family: arial;"><span><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEguxh52m7477U0aeH1_pRdKN0j3JFNpp0Gm39vaVuZMzSD5NVsly43zptppCOzd2hlkons32e4sFzGaibH8w8pXbdlPGJQa3bPyfRmeJ1Bgdpr8y4kY0eExrYvjt_zJRtNgLgBxD3_CUAyTsXV0IdFh69ASx56fbxGd8mLJLwJQXXmXTF_qT0iSBuhm1Q" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="269" data-original-width="957" height="90" src="https://blogger.googleusercontent.com/img/a/AVvXsEguxh52m7477U0aeH1_pRdKN0j3JFNpp0Gm39vaVuZMzSD5NVsly43zptppCOzd2hlkons32e4sFzGaibH8w8pXbdlPGJQa3bPyfRmeJ1Bgdpr8y4kY0eExrYvjt_zJRtNgLgBxD3_CUAyTsXV0IdFh69ASx56fbxGd8mLJLwJQXXmXTF_qT0iSBuhm1Q" width="320" /></a></div><br /></div></span></span></i><div><span style="font-family: arial;">My current hole-filling median algorithm does not fix all the small reprojection gaps, it could be more aggressive, but in practice right now it didn't seem to be a problem.</span></div></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Now let's start adding in the voxel point splats:</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEi2EkwuGWrj4wB47IXnL9SqVujSICxyrQicPEFfv9bYsarGRX9GdmVy7eRoSYrfgggdMNBLOK2r7dd_LN7KykqRSp_O-cItUU-zAsxhFKvjh8zKvje0bfwcWN0q6fkcqRwOyYiqnM1AzKOyU6ziduGL8IL0CVGRqbBHSwc8M32JPalZ4HKpLDDTyweFjw" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="270" data-original-width="958" height="90" src="https://blogger.googleusercontent.com/img/a/AVvXsEi2EkwuGWrj4wB47IXnL9SqVujSICxyrQicPEFfv9bYsarGRX9GdmVy7eRoSYrfgggdMNBLOK2r7dd_LN7KykqRSp_O-cItUU-zAsxhFKvjh8zKvje0bfwcWN0q6fkcqRwOyYiqnM1AzKOyU6ziduGL8IL0CVGRqbBHSwc8M32JPalZ4HKpLDDTyweFjw" width="320" /></a></div><br />And finally, only in the areas that still are "empty" from either pass, we do a further dilation (this time, a larger filter, starting from 3x3 but going up to 5x5, taking the farthest sample)</span></div><div><i><span style="font-family: arial;"><span><br /></span></span></i></div><div><i><span style="font-family: arial;"><span><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/a/AVvXsEgQlFmOoAQiLYkcjDpy-4xd2u9ysDFU757bg0jKIA4t1jyh40fBERTQ-30uwk2wZtAlOY_Imhj5vvTnmmuz_a0Rw2WzR4DJsM2JrQO8QYxdvz4t6JMQ_q0IRWy4RWk0kzFs-lcB27xSBl32Ixn5QXvrYRrfmaoFc18LcwDygsbej9uSrWH6HTN9PkyJ0A" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="267" data-original-width="961" height="89" src="https://blogger.googleusercontent.com/img/a/AVvXsEgQlFmOoAQiLYkcjDpy-4xd2u9ysDFU757bg0jKIA4t1jyh40fBERTQ-30uwk2wZtAlOY_Imhj5vvTnmmuz_a0Rw2WzR4DJsM2JrQO8QYxdvz4t6JMQ_q0IRWy4RWk0kzFs-lcB27xSBl32Ixn5QXvrYRrfmaoFc18LcwDygsbej9uSrWH6HTN9PkyJ0A" width="320" /></a></div><br /></span></span></i></div><div><span style="font-family: arial;"><span>We get the entire frame reconstructed, with an error that is surprisingly decent.</span></span></div><div><i><span style="font-family: arial;"><span><br /></span></span></i></div><div><i><span style="font-family: arial;"><span>A</span> cute trick: it's cheap to use the subvoxel data, when we don't render the 2x2x2, to bias the position of the voxel point. Just a simple lookup[256] to a float3 with the average position of the corresponding full subvoxels for that given encoded byte.</span></i></div><div><span style="font-family: arial;"><i><br /></i></span></div><div><span style="font-family: arial;"><i>This reasoning could be extended to "supervoxels", 64 bits could and should (data should be in Morton order, which would result in an implicit, full octree) encode 2x2x2 8 bit voxels... then far away we could splat only one point per 64bit supervoxels, and position it with the same bias logic (create an 8bit mask from the 64bits, then use the lookup).</i></span></div></div></div><div><span style="font-family: arial;"><i><br /></i></span></div><div><span style="font-family: arial;"><i><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj4paoVIaf2qjrjsBe1ke2pOtiP2UUbZfwn5WES9tIEVNFJisUAO59_dJBD7P2Ul9jfZuA_dAZMWXcZ2RH_2QVLG_KLKC6HjRmDMyLiJDtqV01pHNekT81wrPssYOTx3qxl5amKjpwvLKfmtRr1Www5Sg1C_zM0jVHV5VN-3gRV5Xri8nuAeJKHbIRWLQ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="547" data-original-width="826" height="212" src="https://blogger.googleusercontent.com/img/a/AVvXsEj4paoVIaf2qjrjsBe1ke2pOtiP2UUbZfwn5WES9tIEVNFJisUAO59_dJBD7P2Ul9jfZuA_dAZMWXcZ2RH_2QVLG_KLKC6HjRmDMyLiJDtqV01pHNekT81wrPssYOTx3qxl5amKjpwvLKfmtRr1Www5Sg1C_zM0jVHV5VN-3gRV5Xri8nuAeJKHbIRWLQ" width="320" /></a></div><br /><br /></i></span></div><div><span style="font-family: arial;"><br /></span></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-46318983121187666672023-04-10T18:00:00.003-07:002023-04-10T18:00:58.501-07:00From the archive: Notes on GGX parallax correction.<div class="separator"></div><div style="text-align: justify;"><span style="font-family: arial;"><i>As for all my "series" - this might very well the first and last post about it, we'll see. I have a reasonable trove of solutions on my hard-drive that were either shipped, but never published, not even shipped or were, shipped, "published" but with minimal details, as a side note of bigger presentations. Wouldn't it be a shame if they spoiled?</i></span></div><span style="font-family: arial;"><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><div>Warning! 
All of what I'm going to talk about next probably is not very meaningful if you haven't been implementing parallax-corrected cubemaps before (or rather, recently), but if you did, it will (hopefully) all make sense.</div><div>This is not going to be a gentle introduction to the topic, just a dump of some notes...</div><div><br /></div></div><div style="text-align: justify;">Preconvoluted specular cubemaps come with all kinds of errors, but in the past decade or so we invented a better technique, where we improve the spatial locality of the cubemap by using a proxy geometry and raycasting. </div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Typically the proxy geometry is rectangular, and the technique is known as <a href="https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/">parallax-corrected specular cubemaps</a>. This better technique comes with even more errors built-in, I did a summary of all of the <a href="http://c0de517e.blogspot.ca/2015/03/being-more-wrong-parallax-corrected.html">problems here, back in 2015</a>.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhiE2Z-ARn26D5tGCGduonozGL2wVlIF766Yso157kTtwU6Od0378w4VOMSDni3w2r-tgUK4eDvIMTp_XSC2lSoQ8nv_I7SflpS-wcPuceWHffEB_d0KF5k8OgYzhJ_9p7au1gVOwEeqjoTscSengZTVDubs4bK3WdciSUVK_uqJB-2N7dw0cGpeBRomg" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="503" data-original-width="1007" height="160" src="https://blogger.googleusercontent.com/img/a/AVvXsEhiE2Z-ARn26D5tGCGduonozGL2wVlIF766Yso157kTtwU6Od0378w4VOMSDni3w2r-tgUK4eDvIMTp_XSC2lSoQ8nv_I7SflpS-wcPuceWHffEB_d0KF5k8OgYzhJ_9p7au1gVOwEeqjoTscSengZTVDubs4bK3WdciSUVK_uqJB-2N7dw0cGpeBRomg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">From Seb. Lagarde (link above)</td></tr></tbody></table><br /></div><div style="text-align: justify;">The following is an attempt to solve one of the defects parallax correction introduces, by retrofitting some math I did for area lights to see if we can come up with a good solution.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Setup is the following: We have a cubemap specular reflection probe somewhere, and we want to use that to get the specular from a location different from the cube center. </div><div style="text-align: justify;">In order to do so, we trace a reflection ray from the surface to be shaded to the scene geometry, represented via some proxies that are easy to intersect, then we look the reflection baked in the probe towards the intersection point.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The problem with this setup is illustrated below. 
If you think of the specular lobe as projecting its intensity on the surfaces of the scene, you get a given footprint, which will be in general discontinuous (due to visibility) and stretched.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Think of our specular lobe like shining light from a torch on a surface.</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><img height="180" src="https://media.wired.com/photos/5a8391eeab6b9732a8555cae/16:9/w_2400,h_1350,c_limit/flashlight-595110980.jpg" width="320" /></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Clearly, when we baked the cubemap, we were moving the torch in a given way, from the cubemap center all around. When we query though, we are looking for the lobe that a torch would create on the scene from the shaded point, towards the reflection direction (or well, technically not as a BRDF is not a lobe around the mirror reflection direction but you know that with preconvolved cubemaps we always approximate with "Phong"-like lobes).</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><img height="240" src="https://lh6.googleusercontent.com/VnNth9WqPQP6VnQEOL--3UoSNJfUrZad8acLVTdcGjU-obCZ-9bh21PkkvP-8kYRHxWcBqN_rnM08j2DvScMThOQu_kOgGQtwhJA_w9bOoTObbw1tOINQ6W0VzuHqpBISObpNgkErhCAnhn93j1mXqQ=w320-h240" width="320" /></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">By using the cubemap information, we get a given projected kernel which in general doesn't match -at all- the kernel that our specular lobe on the surface projects.</div><div style="text-align: justify;">There is no guarantee that they are even closely related, because they can be at different distances, at different angles and "looking" at different scene surfaces (due to discontinuities).</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Now, geometry is the worst offender here. 
</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Even if the parallax proxy geometry is not the real scene, and we use proxies that are convex (boxes, k-dops...), naively intersecting planes to get a "corrected" reflection lookup clearly shows in shading at higher roughness, due to discontinuities in the derivatives.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj5IHKMMJ-HbNMblwa_YFD_wQcqoPA4Hpch2gQTJljiGHe6B7qzoywnElkpNJBHh_G7r2QMHhW_549NGlbnUVOZWEkugdYfjwDe4hQqbxyYTczuNeX1CHo0J--JCE2_jh6CkPvglZCADyxVpVnG7EZ4KZaeQKQgr777mxqG6863h7cDYFwIhWrBVVQ5zA" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="754" data-original-width="1320" height="183" src="https://blogger.googleusercontent.com/img/a/AVvXsEj5IHKMMJ-HbNMblwa_YFD_wQcqoPA4Hpch2gQTJljiGHe6B7qzoywnElkpNJBHh_G7r2QMHhW_549NGlbnUVOZWEkugdYfjwDe4hQqbxyYTczuNeX1CHo0J--JCE2_jh6CkPvglZCADyxVpVnG7EZ4KZaeQKQgr777mxqG6863h7cDYFwIhWrBVVQ5zA" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">From <a href="https://www.youtube.com/watch?v=_GTXBS0eWN4">youtube</a> - note how the reflected corners of the room appear sharp, are not correctly blurred by the rough floor material.</td></tr></tbody></table></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The proxy geometry becomes "visible" in the reflection: as the ray changes plane, it changes the ratio of correction, and the plane discontinuity becomes obvious in the final image. </div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">This is why in practice intersecting boxes is not great, and you'd have to find some smoother proxy geometry or "fade" out the parallax correction at high roughness. To my knowledge, everyone (??) does this "by eye", I'm not aware of a scientific approach, motivated in approximations and errors.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Honestly today I cannot recall what ended up shipping at the time, I think we initially had the idea of "fading" the parallax correction, then I added a weighting scheme to "blend" the intersection (ray parameter) between planes, and I also "pushed away" the parallax planes if we are too near them.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">In theory you could intersect something like a rounded box primitive, control the rounding with the roughness parameter, and reason about Jacobians (derivatives, continuity of the resulting filtering kernel, distortion...) but that sounds expensive and harder to generalize to k-dops.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The second worst "offender" with parallax correction is the difference in shape of the specular lobes, the precomputed one versus the "ideal" one we want to reconstruct, that happens even when both are projected on the same plane (i.e. 
in absence of visibility discontinuities).</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The simplest correction to make is in the case where the two lobes are both perpendicular to a surface, the only difference being the distance to it.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">This is relatively easy as increasing the distance looks close enough to increasing the roughness. Not exactly the same, but close enough to fit a simple correction formula that tweaks the roughness we fetch from the cubemap based on the ratio between the cubemap-to-intersection distance and the surface-to-intersection one:</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><img height="272" src="https://lh4.googleusercontent.com/hrFjoCZRxJSWrTjPpFjeXox8nvOndW2YqDNF5PAqanoim5dBY15p775yhhuiO7_lMZX0hVZfxw3H8GW0QHLe1UYqijYJuL5qEGts30RtciBfoERBoUTWyd8U70JxTK7K_V57_Nmr58evWofT-wc3KdQ=w320-h272" width="320" /></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">From this observation we know we can use numerical fitting and precomputation to find a correction factor from one model to another. </div><div style="text-align: justify;">Then, we can take that fitted data and either using a lookup for the conversion or we can find an analytic function that approximates it.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">This methodology is what I described at <a href="http://c0de517e.blogspot.com/2016/07/siggraph-2015-notes-for-approximate.html">Siggraph 2015</a> and have used many times since. Formulate an hypothesis: this can be approximated with that. Use brute force to optimize free parameters. Visualize the fitting and end results versus ground truth to understand if the process worked or if not, why not (where are the errors). Rinse and repeat.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Here you can see the first step. For every roughness (alpha) and distance, I fit a GGX D lobe with a new alpha', here adding a multiplicative scaling factor and an additive offset (subtractive, really, as the fitting will show).</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEi7pr0_jRpQayvm0bxXvjTjog25RpUmFhkBC4kRP2e9_L9MwvArY57v_CeXFWXrso1hOo2cnDjoXjVfArmHxEvbRMXxFTfZ64lhu59kQkeV1KK01BYREIO2TAn4ALxyu-HnYymEUB_g0tz7oZtUSlgKsd2__B1qqVftjwrv5byXKMzLpHJSeHp_U1F6Ow"><img height="210" src="https://blogger.googleusercontent.com/img/a/AVvXsEi7pr0_jRpQayvm0bxXvjTjog25RpUmFhkBC4kRP2e9_L9MwvArY57v_CeXFWXrso1hOo2cnDjoXjVfArmHxEvbRMXxFTfZ64lhu59kQkeV1KK01BYREIO2TAn4ALxyu-HnYymEUB_g0tz7oZtUSlgKsd2__B1qqVftjwrv5byXKMzLpHJSeHp_U1F6Ow=w320-h210" width="320" /></a></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Why we use an additive offset? Well, it helps with the fitting, and it should be clear why, if we look at the previous grid. 
GGX at high roughness has long tail that turns "omnidirectional", whilst a low roughness lobe that is shining far away from a plane does not exhibit that omnidirectional factor.</div></span><span style="font-family: arial;"><div style="text-align: justify;"><br /></div><div style="text-align: center;"><img height="92" src="https://lh4.googleusercontent.com/Q_j9NumHZd80s_-H1lxZiYU5kwE79RgHspcNSM3dlMA7qoIzZjc1c3zDoQV_ARu-G5YsnkD7BVI3G4lI9G4exMzzcEuCzwfF0wJ4ANycI8ixceOs-nHSteLpWLZkVDzCE-nhsyBxjJEcoVwFw8sTMsw=w320-h92" width="320" /></div><div style="text-align: center;"><img height="129" src="https://lh3.googleusercontent.com/bIpz6JFZriU3N05pBs4p6T7CSNZDZClM4in9ZOLN7bx55xmQC4e1z50ih4rQnNy8GNTAOlwJKs9EVctRTRrsQZvutDolNY_J6jSsmYGm3KpYqAQW7cKiX2-haFLkzZpVHcVMylkjgRwTDuyJ--OUwCo=w320-h129" width="320" /></div><div style="text-align: center;"><img height="119" src="https://lh4.googleusercontent.com/gwXKGXm5hndQprpNatFTycYpjcbp8DiVwkUrhrBsHjv3Sf96QKKH0-kltwHMA7aWhH4j-PKHnDCSRapefg3yWQbvVt_FwE8n3a8KXMiiJVlZilVv-fT6DzminyKzztlUWFLrdBopuDIDrf85e2VMRxU=w320-h119" width="320" /></div></span><div style="text-align: justify;"><span style="font-family: arial;"><br /></span></div><div style="text-align: justify;"><span style="font-family: arial;">We cannot use it though, we employ only to help the fitting process find a good match. Why? Well, first, because we can't express it with a single fetch in a preconvolved cubemap mip hierarchy (we can only change the preconvolved lobe by a multiplicative factor), but also note that it is non-zero only in the area where the roughness maxes out (we cannot get rougher than alpha=1), and in that area there is nothing really that we can do.</span></div><span style="font-family: arial;"><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Of course, next we'd want to find an analytic approximation, but also make sure everything is done in whatever exact association there is from cubemap mip level to alpha, ending up with a function that goes from GGX mip selection to adjusted GGX mip selection (given the distance). </div><div style="text-align: justify;">This is really engine-dependent, and left as an exercise to the reader (in all honesty, I don't even have the final formulas/code anymore)</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Next up is to consider the case where the cubemap and the surface are not perpendicular to the intersection plane (even keeping that to be just a plane, so again, no discontinuities). 
Can we account for that as well?</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">To illustrate the problem, the following <span style="font-family: Arial; white-space: pre-wrap;">shows the absolute value of the cosine of the angle of the intersection between the reflection direction and the proxy planes in a scene.</span></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEizfBhCa3pwijS5EKHOYNrTS16pwubP2XFfMVkg5XDyW97bomQZmFewkDMX7tt2LRQOO6iAiY1VK2ovEdjFEwr5OYP6aqY0cjm2Xwn_6hhMStwdCVxTPL8Qgd-sOrvGP1ajMNgcPGCxB3ej17__Le0t8r73Ev7zSP8WxdziYJpLERUvJ4fo2F2WK-L1_w" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="934" data-original-width="1186" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEizfBhCa3pwijS5EKHOYNrTS16pwubP2XFfMVkg5XDyW97bomQZmFewkDMX7tt2LRQOO6iAiY1VK2ovEdjFEwr5OYP6aqY0cjm2Xwn_6hhMStwdCVxTPL8Qgd-sOrvGP1ajMNgcPGCxB3ej17__Le0t8r73Ev7zSP8WxdziYJpLERUvJ4fo2F2WK-L1_w" width="305" /></a></div></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">This is much harder to fit a correction factor for. The problem is that the two different directions (the precomputed one and the actual one) can be quite different.</div><div style="text-align: justify;">Same distance, one kernel hits at polar angle Pi/3,0, the second -Pi/3,Pi/3. How do you adjust the mip (roughness) to make one match the other?</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgbfeKkamjG-gIfJBJVJUD2xmdkvikWws0uNoDW1kq2WCOiTUXO1AffrhRKbHWDxsZvJ7c0G75Asj_ARE3EtZPpP7qLCjCd2kHLX-t_bagb5wlQRHjUvltjHXx0qqR9FnRnGAqj03VJz9kZ_xfRGGgrjdTCNJNr032iVvOdI8HV-uZ_eGpuZpb1OQvpcw"><img height="200" src="https://blogger.googleusercontent.com/img/a/AVvXsEgbfeKkamjG-gIfJBJVJUD2xmdkvikWws0uNoDW1kq2WCOiTUXO1AffrhRKbHWDxsZvJ7c0G75Asj_ARE3EtZPpP7qLCjCd2kHLX-t_bagb5wlQRHjUvltjHXx0qqR9FnRnGAqj03VJz9kZ_xfRGGgrjdTCNJNr032iVvOdI8HV-uZ_eGpuZpb1OQvpcw=w200-h200" width="200" /></a></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><div>One possible idea is to consider how different is the intersection at an angle and the corresponding perpendicular one.</div><div>If we have a function that goes from angle,distance -> an isotropic, perpendicular kernel (roughness', angle=0, same distance) then we could maybe go from the real footprint we need for specular to an isotropic footprint, and from the real footprints that we have in the cubemap mips to the isotropic and search for the closest match between the two isotropic projections.</div></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The problem here is that really, with a single fetch/isotropic kernel, it doesn't seem that there a lot to gain by changing the roughness as function of the angle. </div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">In the following, I grapth projections at an angle compared to perpendicular lobe (GGX D term only). </div><div style="text-align: justify;">All graphs are with alpha = 0.1, distance = plane size (so it's equivalent to the kernel at the center of a prefiltered cubemap when you ignore the slant). 
</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Pi/6 - the two lobes seem "visually" very close:</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjDzufxxADuzpikfeM4ei8sFhaiRIm3rSpzZZjnbv_pbEe6WihdW1uSz1rq_2pJG14I8eSkUwR_lPUOWvIZPWhnBkcUZWMdFopZkI61vA9qnJ1teWVz5WlQe29D833RZOHQIxCeBDQMAgg9SPjFA5iTcWtv209bTUXzKwih634WO_vsntfPHxv7EDxroQ"><img height="98" src="https://blogger.googleusercontent.com/img/a/AVvXsEjDzufxxADuzpikfeM4ei8sFhaiRIm3rSpzZZjnbv_pbEe6WihdW1uSz1rq_2pJG14I8eSkUwR_lPUOWvIZPWhnBkcUZWMdFopZkI61vA9qnJ1teWVz5WlQe29D833RZOHQIxCeBDQMAgg9SPjFA5iTcWtv209bTUXzKwih634WO_vsntfPHxv7EDxroQ=w200-h98" width="200" /></a></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">At Pi/2.5 we get a very long "tail" but note that the width of the central part of the kernel seems still to fit the isotropic fetch without any change of roughness.</div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhpJdGsSQo2T14rOmIs5dDUeBBTtf538qPe3NVlUpZug2DkdRmuwpprFC-qc5No3OjCcVoCXdC5Jg7yZGk6zSRQZKlnICcOAm9Uy-yvH6MzcaeHFpA2-7ElzehpEEnXUJdSTdBVTRazhQ7EoyUB5VvTKXlUuZFBUWf_2yh4KtscA7jn5Vgc6Fbse5ElEg"><img height="103" src="https://blogger.googleusercontent.com/img/a/AVvXsEhpJdGsSQo2T14rOmIs5dDUeBBTtf538qPe3NVlUpZug2DkdRmuwpprFC-qc5No3OjCcVoCXdC5Jg7yZGk6zSRQZKlnICcOAm9Uy-yvH6MzcaeHFpA2-7ElzehpEEnXUJdSTdBVTRazhQ7EoyUB5VvTKXlUuZFBUWf_2yh4KtscA7jn5Vgc6Fbse5ElEg=w200-h103" width="200" /></a></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Now here "seems to fit" really doesn't mean much. What we should do is to look at rendered results, compare to ground truth / best effort (i.e. using sampling instead of prefiltering, whilst still using the assumption of representing radiance with the baked, localized cubemap), and if we want to then use numerical methods, do so with an error measure based on some perceptual metric.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">And this is what I did, but failed to find any reasonable correction, keeping the limitation of a single fetch. The only hope is to turn to multiple fetches, and optimize the preconvolution specifically to bake data that is useful for the reconstruction, not using a GGX prefiltering necessarily.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">I suspect that actually the long anisotropic tail created by the BRDF specular lobe is not, visually, an huge issue. </div><div style="text-align: justify;">The problem that what we get is (also) the opposite, from the point of view of the reconstruction, we get tails "baked" into the prefiltered cube at arbitrary angles (compared to the angles we need for specular on surfaces), and these long tails create artifacts.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">To account for that, the prefiltering step should probably take directly into account the proxy geometry shape. I.e. if these observations are correct, they point towards the idea that parallax-corrected cubemaps should be filtered by a fixed distance (relative to projected texel size), perpendicular to the proxy plane kernel. 
</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">That way when we query the cubemap we have only to convert the projected specular kernel to a kernel perpendicular to the surface (which would be ~ the same kernel we get at that roughness and same distance, just perpendicular), and then look in the mip chain the roughness that gives us a similar prefiltered image, by doing a distance-ratio-to-roughness adjustment as described in the first part of this text. </div><div style="text-align: justify;"><br /></div><div style="text-align: center;"><img height="150" src="https://lh6.googleusercontent.com/8r8SAcqGMIE2uICWpsZx1TGfXjp-M7wQVFczotvGMEqjEh3iGb9TjUnN8FR6KwD8yTRbA7kwSzWABhN1Ym1NvOzXqQJw-VreOBcxkEvok78jaaZZyXNLK6UNzY0XqEXxqFXr6ctspML8YBQeJloNkBw=w200-h150" width="200" /></div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><br /></div></span>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-27289639192855688442023-03-15T11:40:00.004-07:002023-03-28T15:44:29.334-07:00 Half baked: Dynamic Occlusion Culling<p style="text-align: justify;"><span style="font-family: arial;">The following doesn't work (yet), but I wanted to write something down both to put it to rest for now, as I prepare for GDC, and perhaps to show the application of some of the ideas I recently <a href="http://c0de517e.blogspot.com/2023/02/how-to-render-it-ten-ideas-to-solve.html">wrote about here</a>.</span></p><p style="text-align: justify;"><span style="font-family: arial;">A bit of context. Occlusion culling (visibility determination) per se is far from a solved problem in any setting, but for us (Roblox) it poses a few extra complications:</span></p><p style="text-align: justify;"></p><ol><li><span style="font-family: arial;">We don't allow authoring of "technical details" - so no artist-crafted occluders, cells and portals, and the like.</span></li><li><span style="font-family: arial;">Everything might move - even if we can reasonably guess what is dynamic in a scene, anything can be changed by a <a href="https://luau-lang.org/">LuaU</a> script.</span></li><li><span style="font-family: arial;">We scale down to very low-power and older devices - albeit this might not necessarily be a hard constraint here, as we could always limit the draw distance on low-end to such degrees that occlusion culling would become less relevant. But it's not ideal, of course.</span></li></ol><p></p><p style="text-align: justify;"><span style="font-family: arial;">That said, let's start and find some ideas on how we could solve this problem, by trying to imagine <a href="https://c0de517e.blogspot.com/2015/03/design-optimization-landscape.html">our design landscape</a> and its possible branches. 
</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjs_EigDfOyfww1Ue03EZVmhabmhwx00uVIwm6llf0Ht6fVCjQb5urTAG9QiaUX_A90j4oPij2kQVLgAk4LTaKMk-cC77WNHj6VlzLJkHFR2EMoSlfJDHkJdCCKRK37zaGCkxKADoQTnkvVLcU9L1h1Medq_ECV2unCFXdoR9rqBwA5pkJ5Nm2reYEzhQ" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="600" data-original-width="1000" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEjs_EigDfOyfww1Ue03EZVmhabmhwx00uVIwm6llf0Ht6fVCjQb5urTAG9QiaUX_A90j4oPij2kQVLgAk4LTaKMk-cC77WNHj6VlzLJkHFR2EMoSlfJDHkJdCCKRK37zaGCkxKADoQTnkvVLcU9L1h1Medq_ECV2unCFXdoR9rqBwA5pkJ5Nm2reYEzhQ=w400-h240" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image from https://losslandscape.com/gallery/</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Real-time "vs" Incremental</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">I'd say we have a first obvious choice, given the dynamic nature of the world. Either we try to do most of the work in real-time, or we try to incrementally compute and cache some auxiliary data structures, and we'd have then to be prepared to invalidate them when things move.</span></p><p style="text-align: justify;"><span style="font-family: arial;">For the real-time side of things everything (that I can think of) revolves around some form of testing the depth buffer, and the decisions lie in where and when to generate it, and when and where to test it. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Depth could be generated on the GPU and read-back, typically a frame or more late, to be tested on CPU, it could be generated and tested on GPU, if our bottlenecks are not in the command buffer generation (either because we're that fast, or because we're doing GPU-driven rendering), or it could be both generated and tested on CPU, via a software raster. Delving deeper into the details reveals even more choices. </span></p><p style="text-align: justify;"><span style="font-family: arial;">On GPU you could use occlusion queries, predicated rendering, or a "software" implementation (shader) of the same concepts, on CPU you would need to have a heuristic to select a small set of triangles as occluders, make sure the occluders themselves are not occluded by "better" ones and so on.</span></p><p style="text-align: justify;"><span style="font-family: arial;">All of the above, found use in games, so on one hand they are techniques that we know could work, and we could guess the performance implications, upsides, and downsides, and at the same time there is a lot that can still be improved compared to the state of the art... but, improvements at this point probably lie in relatively low-level implementation ideas. </span></p><p style="text-align: justify;"><span style="font-family: arial;">E.g. trying to implement a raster that works "conservatively" in the sense of occlusion culling is still hard (no, it's not the same as conservative triangle rasterization), or trying to write a parallelized raster that still allows doing occlusion tests while updating it, to be able to occlude-the-occluders while rendering them, in the same frame, things of that nature. 
</span></p><p style="text-align: justify;"><span style="font-family: arial;">As I wanted to explore more things that might reveal "bigger" surprises, I "shelved" this branch...</span></p><p style="text-align: justify;"><span style="font-family: arial;">Let's then switch to thinking about incremental computation and caching.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Caching results or caching data to generate them?</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">The first thing that comes to mind, honestly, is just to cache the results of our visibility queries. If we had a way to test the visibility of an object, even after the fact, then we could use that to incrementally build a PVS. Divide the world into cells of some sort, maybe divide the cells per viewing direction, and start accumulating the list of invisible objects.</span></p><p style="text-align: justify;"><span style="font-family: arial;">All of this sounds great, and I think the biggest obstacle would be to know when the results are valid. Even offline, computing a PVS from raster visibility is not easy, you are sampling the space (camera positions, angles) and the raster results are not exact themselves, so, you can't know that your data structure is absolutely right, you just trust that you sampled enough that no object was skipped. For an incremental data structure, we'd need to have a notion of "probability" of it being valid.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>You can see a pattern here by now, a way of "dividing and conquering" the idea landscape, the more you think about it, the more you find branches and decide which ones to follow, which ones to prune, and which ones to shelve. </i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Pruning happens either because a branch seems too unlikely to work out, or because it seems obvious enough (perhaps it's already well known or we can guess with low risk) that it does not need to be investigated more deeply (prototyping and so on). </i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Shelving happens when we think something needs more attention, but we might want to context-switch for a bit to check other areas before sorting out the order of exploration...</i></span></p><p style="text-align: justify;"><span style="font-family: arial;">So, going a bit further here, I imagined that visibility could be the property of an object - a visibility function over all directions, for each direction the maximum distance at which it would be unoccluded - or the property of the world, i.e. from a given region, what can that region see. The object perspective, even if intriguing, seems a mismatch both in terms of storage and in terms of computation, as it thinks of visibility as a function - which it is, but one that is full of discontinuities that are just hard to encode.</span></p><p style="text-align: justify;"><span style="font-family: arial;">If we think about world, then we can imagine either associating a "validity" score to the PVS cells, associating a probability to the list of visible objects (instead of being binary), or trying to dynamically create cells. We know we could query, after rendering, for a given camera the list of visible objects, so, for an infinitesimal point in 5d space, we can create a perfect PVS. 
From there we could cast the problem as how to "enlarge" our PVS cells, from infinitesimal points to regions in space. </span></p><p style="text-align: justify;"><span style="font-family: arial;">This to me, seems like a viable idea or at least, one worth exploring in actual algorithms and prototypes. Perhaps there is even some literature about things of this nature I am not aware of. Would be worth some research, so for now, let's shelve it and look elsewhere!</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Occluders</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Caching results can be also thought of as caching visibility, so the immediate reaction would be to think in terms of occluder generation as the other side of the branch... but it's not necessarily true. In general, in a visibility data structure, we can encode the occluded space, or the opposite, the open space. </span></p><p style="text-align: justify;"><span style="font-family: arial;">We know of a popular technique for the latter, portals, and we can imagine these could be generated with minimal user intervention, as Umbra 3 introduced many years ago the idea of deriving them through scene voxelization.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhN4UYaD8hrMmt9e7B17fONIfkDI0ZHhYtSTVieInSIpplW6uCopb6vDGEoymmrweXCDHfb4kMSMbbLFB7e-zJJolkdaPTodoUxJbT9vteNUAhkCKckifUdcuRk7fuSV7Uok5u3MYLUxdnr9g6NedbKmBbC-i1HuueVvrc9lPzTAR3fmcDeEcl4ud_e8Q" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="154" data-original-width="575" height="172" src="https://blogger.googleusercontent.com/img/a/AVvXsEhN4UYaD8hrMmt9e7B17fONIfkDI0ZHhYtSTVieInSIpplW6uCopb6vDGEoymmrweXCDHfb4kMSMbbLFB7e-zJJolkdaPTodoUxJbT9vteNUAhkCKckifUdcuRk7fuSV7Uok5u3MYLUxdnr9g6NedbKmBbC-i1HuueVvrc9lPzTAR3fmcDeEcl4ud_e8Q=w640-h172" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://medium.com/@Umbra3D/introduction-to-occlusion-culling-3d6cfb195c79">Introduction to Occlusion Culling | by Umbra 3D | Medium</a></td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">It's realistic to imagine that the process could be made incremental, realistic enough that we will shelve this idea as well...</span></p><p style="text-align: justify;"><span style="font-family: arial;">Thinking about occluders seem also a bit more natural for an incremental algorithm, not a big difference, but if we think of portals, they make sense when most of the scene is occluded (e.g. indoors), as we are starting with no information, we are in the opposite situation, where at first the entire scene is disoccluded, and progressively might start discovering occlusion, but hardly "in the amount" that would make most natural sense to encode with something like portals. </span><span style="font-family: arial;">There might be other options there, it's definitely not a dead branch, but it feels unlikely enough that we might want to prune it.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Here, is where I started going from "pen and paper" reasoning to some prototypes. 
I still think the PVS idea that we "shelved" might get here as well, but I chose to get to the next level on occluder generation for now. </span></p><p style="text-align: justify;"><span style="font-family: arial;">From here on the process is still the same, but of course writing code takes more time than rambling about ideas, so we will stay a bit longer on one path before considering switching. </span></p><p style="text-align: justify;"><span style="font-family: arial;">When prototyping I want to think of what the real risks and open questions are, and from there find the shortest path to an answer, hopefully via a proxy. I don't need at all to write code that implements the way I think the idea will work out if I don't need to - a prototype is not a bad/slow/ugly version of the final product, it can be an entirely different thing from which we can nonetheless answer the questions we have.</span></p><p style="text-align: justify;"><span style="font-family: arial;">With this in mind, let's proceed. What are occluders? A simplified version of the scene, that guarantees (or at least tries) to be "inside" the real geometry, i.e. to never occlude surfaces that the real scene would not have occluded. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Obviously, we need a simplified representation, because otherwise solving visibility would be identical to rendering, minus shading, in other words, way too expensive. Also obvious that the guarantee we seek cannot hold in general in a view-independent way, i.e. there's no way to compute a set of simplified occluders for a polygon soup from any point of view, because polygon soups do not have well-defined inside/outside regions.</span></p><p style="text-align: justify;"><span style="font-family: arial;">So, we need to simplify the scene, and either accept some errors or accept that the simplification is view-dependent. How? Let's talk about spaces and data structures. As we are working on geometry, the first instinct would be to somehow do computation on the meshes themselves, in object and world space. </span></p><p style="text-align: justify;"><span style="font-family: arial;">It is also something that I would try to avoid, pruning that entire branch of reasoning, because geometric algorithms are among the hardest things known to mankind, and I personally try to avoid writing them as much as I can. I also don't have much hope for them to be able to scale as the scene complexity increases, to be robust, and so on (albeit I have to say, wizards at Roblox working on our real-time CSG systems have cracked many of these problems, but I'm not them).</span></p><p style="text-align: justify;"><span style="font-family: arial;">World-space versus screen-space makes sense to consider. For data structures, I can imagine point clouds and voxels of some sort to be attractive.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>First prototype: Screen-space depth reprojection</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Took a looong and winding road to get here, but this is one of the most obvious ideas as CryEngine 3 showed it to be working more than ten years ago. 
</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgTiKXWXiDiipz1QDectEeoBRntuXWZhr-__FJwTBwHaw9p7aKP6lL4AJUav71M-FT_7pdZk_VOwlvH22xuPh3XQ68kGmnLIKPwraGso5wgVfkd1l9RkPUX06Zx_fGpyslUBJ_p-5ypAZJLTbtRwxESqoNCgyc1UHKKdnmJe_emy0QxHPRga17k99f4nA" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="513" data-original-width="925" height="177" src="https://blogger.googleusercontent.com/img/a/AVvXsEgTiKXWXiDiipz1QDectEeoBRntuXWZhr-__FJwTBwHaw9p7aKP6lL4AJUav71M-FT_7pdZk_VOwlvH22xuPh3XQ68kGmnLIKPwraGso5wgVfkd1l9RkPUX06Zx_fGpyslUBJ_p-5ypAZJLTbtRwxESqoNCgyc1UHKKdnmJe_emy0QxHPRga17k99f4nA" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="http://www.klayge.org/material/4_1/SSR/S2011_SecretsCryENGINE3Tech_0.pdf">Secrets of CryEngine 3</a></td></tr></tbody></table><span style="font-family: arial;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: arial;">I don't want to miscredit this, but I think it was Anton Kaplanyan's work (if I'm wrong let me know and I'll edit), and back then it was dubbed "coverage buffer", albeit I'd discourage the use of the word as it already had a different meaning (the c-buffer is a simpler version of the span-buffer, a way to accelerate software rasterization by avoiding to store a depth value per pixel). </span></p><p style="text-align: justify;"><span style="font-family: arial;">They simply took the scene depth after rendering, downsampled it, and reprojected - by point splatting - from the viewpoint of the next frame's camera. This creates holes, due to disocclusion, due to lack of information at the edges of the frame, and due to gaps between points. CryEngine solved the latter by running a dilation filter, able to eliminate pixel-sized holes, while just accepting that many draws will be false positive due to the other holes - thus not having the best possible performance, but still rendering a correct frame. </span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgLz17rxcdFAlN3FNSDJ-VWy3H21Z6iJQ1fEw2yRRRNougyQs47iunEtLkyeJOW_tK-EPTzNg_lysbc6kZKj9dGkv03RhxJ2aePHxxTysEc1ff7GuCMkyPxfn4SIpOdigMMf2VylgytW57901ViFm0bBF361XP_yYmfbhumqfCPcPke5Ceo2ziK6PJr8w" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="190" data-original-width="330" height="184" src="https://blogger.googleusercontent.com/img/a/AVvXsEgLz17rxcdFAlN3FNSDJ-VWy3H21Z6iJQ1fEw2yRRRNougyQs47iunEtLkyeJOW_tK-EPTzNg_lysbc6kZKj9dGkv03RhxJ2aePHxxTysEc1ff7GuCMkyPxfn4SIpOdigMMf2VylgytW57901ViFm0bBF361XP_yYmfbhumqfCPcPke5Ceo2ziK6PJr8w" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Holes, in red, due to disocclusions and frame edges.</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">This is squarely in the realm of real-time solutions though, what are we thinking? 
</span></p><p style="text-align: justify;"><span style="font-family: arial;">Well, I was wondering if this general idea of having occluders from a camera depthbuffer could be generalized a bit more. First, we could think of generating actual meshes - world-space occluders, from depth-buffer information. </span></p><p style="text-align: justify;"><span style="font-family: arial;">As we said above, these would not be valid from all view directions, but we could associate the generated occluders from a set of views where we think they should hold up.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Second, we could keep things as point clouds and use point splatting, but construct a database from multiple viewpoints so we have more data to render occluder and fill the holes that any single viewpoint would create.</span></p><p style="text-align: justify;"><span style="font-family: arial;">For prototyping, I decided to use Unity, I typically like to mix things up when I write throwaway code, and I know <a href="http://c0de517e.blogspot.com/2016/07/unity-101-rendering.html">Unity enough</a> that I could see a path to implement things there. I started by capturing the camera depth buffer, downsampling, and producing a screen-aligned quad-mesh I could displace, effectively like a heightfield. This allowed me to write everything via simple shaders, which is handy due to Unity's hot reloading.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjtbOASkd2VbBLq2iKyUhoLWqKJHwDjRjxCmcpqUcKFIsOuuQHxui1tH1eNj9S4as2dNSWlQ8v45Jf2vmVt-o-_QJr-wBHfBpxpGXzlDW2wdsHpKqoLUu9SVTS3O8JJhkD1gPMfcebjqA1dxWppL6-ObxiYRMIvtiLgH5xwqzjldQiMStYnVEbVIimxsg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="653" data-original-width="1240" height="169" src="https://blogger.googleusercontent.com/img/a/AVvXsEjtbOASkd2VbBLq2iKyUhoLWqKJHwDjRjxCmcpqUcKFIsOuuQHxui1tH1eNj9S4as2dNSWlQ8v45Jf2vmVt-o-_QJr-wBHfBpxpGXzlDW2wdsHpKqoLUu9SVTS3O8JJhkD1gPMfcebjqA1dxWppL6-ObxiYRMIvtiLgH5xwqzjldQiMStYnVEbVIimxsg" width="320" /></a></span></div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiB4KDULm9FISlwt7NtELsCS0Xvocz66q42XQITtvfdxy7zNsdeuZg3MV3L0uvWQVU-noegU7nYI-fGoiTTnUHDF81PNaMdKz4YizPE-CVoP8qT94_vfIWm4DN1A85pUATBOVpWylDMp7IYHhoE9tG0reF34HT7G6ZqGy64xKRsaQSSd3EzN-NedUiGZw" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="721" data-original-width="1258" height="183" src="https://blogger.googleusercontent.com/img/a/AVvXsEiB4KDULm9FISlwt7NtELsCS0Xvocz66q42XQITtvfdxy7zNsdeuZg3MV3L0uvWQVU-noegU7nYI-fGoiTTnUHDF81PNaMdKz4YizPE-CVoP8qT94_vfIWm4DN1A85pUATBOVpWylDMp7IYHhoE9tG0reF34HT7G6ZqGy64xKRsaQSSd3EzN-NedUiGZw" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Test scene, and a naive "shrink-wrap" mesh generated from a given viewpoint</td></tr></tbody></table></span><p style="text-align: justify;"><span style="font-family: arial;">Clearly, this results in a "shrink-wrap" effect, and the generated mesh will be a terrible occluder from novel viewpoints, so we will want to cut it around discontinuities instead. 
In the beginning, I thought about doing this by detecting, as I'm downsampling the depth buffer, which tiles can be well approximated by a plane, and which contain "complex" areas that would require multiple planes. </span></p><p style="text-align: justify;"><span style="font-family: arial;">This is a similar reasoning to how hardware depth-buffer compression typically works, but in the end, proved to be silly.</span></p><p style="text-align: justify;"><span style="font-family: arial;">An easier idea is to do an edge-detection pass in screen-space, and then simply observe which tiles contain edges and which do not. For edge detection, I first generated normals from depth (and here I took a digression trying and failing to improve on the state of the art), then did two tests.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEg6UEOgPHtBEZloFPkQvDATimYbevk8P1wxEvsZgJcKmuvRESFjhEKK1YCeHN1SImtKsh_Yux17Fb9SDRudR1geeZvLAz6UwC5ADUPlMoQr6-FIPexLlwGoQIhCroQq41fpkqqJOjuU5vrUqJRWHEq4W2nIfUZXXGXyDloqL8t04W5QWDIWXEzpqAit6w" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="1640" data-original-width="665" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEg6UEOgPHtBEZloFPkQvDATimYbevk8P1wxEvsZgJcKmuvRESFjhEKK1YCeHN1SImtKsh_Yux17Fb9SDRudR1geeZvLAz6UwC5ADUPlMoQr6-FIPexLlwGoQIhCroQq41fpkqqJOjuU5vrUqJRWHEq4W2nIfUZXXGXyDloqL8t04W5QWDIWXEzpqAit6w" width="97" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A digression...</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">First, if neighboring pixels are close in 3d space, we consider them connected and do not generate an edge. If they are not close, we do a second test by forming a plane with the center pixel and its normal and looking at the point-to-plane distance. 
This avoids creating edges connected geometry that just happens to be at a glancing angle (high slope) in the current camera view.</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEicD5lggSDNrA7TPH7ArbAk-u-yW9jujQlmFwBjubs4UbmWUT8hlTikVfizmL6yz_G0UZH0m8E4JxaPBawptY3fXJZkNhJm1oYyk-nG3H4yMTygu3cToVnST_ZisNs9CiDc16u2VNPAE9d090qcPy76omzKHVK-v1fET9N0Xr_8ODudJUFJiW_P2fS16g" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="475" data-original-width="885" height="172" src="https://blogger.googleusercontent.com/img/a/AVvXsEicD5lggSDNrA7TPH7ArbAk-u-yW9jujQlmFwBjubs4UbmWUT8hlTikVfizmL6yz_G0UZH0m8E4JxaPBawptY3fXJZkNhJm1oYyk-nG3H4yMTygu3cToVnST_ZisNs9CiDc16u2VNPAE9d090qcPy76omzKHVK-v1fET9N0Xr_8ODudJUFJiW_P2fS16g" width="320" /></a><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgK3pzrHHKE6DbpYYupY3rcZCiEsDweG6femBPHO2U9qokE5KlfxOq-FryMdeASiajH0-s4LGBl2rlIXFdM4jDQGiNikbMBThm70XFjGpT0cqJuojKdgLYCt5ynUpHOXIHHfwSmjN-IwsBeQ9Ms8UHuJe_2K192ktDbD0Hya_IPfq7FLMaH_pr1TindhA" style="font-family: arial; margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="475" data-original-width="887" height="171" src="https://blogger.googleusercontent.com/img/a/AVvXsEgK3pzrHHKE6DbpYYupY3rcZCiEsDweG6femBPHO2U9qokE5KlfxOq-FryMdeASiajH0-s4LGBl2rlIXFdM4jDQGiNikbMBThm70XFjGpT0cqJuojKdgLYCt5ynUpHOXIHHfwSmjN-IwsBeQ9Ms8UHuJe_2K192ktDbD0Hya_IPfq7FLMaH_pr1TindhA" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjutKnbWgRfRq0kwgZm2XLgo5AVsdl9ECaCgmu8sWojEJ6dkyLL2avby1hVyDfNaBK-USnnNoZ2M1rDNxvfFLJsz93u_EIXAVgCBPREuiQ9uXtYHoWKj0ufGdRYxQ6q9kWUoPSwsjzSqyR-YH-ONAntBJwjdmt_YWwWHQoGhazZYUAsxlgh0X9NzM0Wng" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="472" data-original-width="883" height="171" src="https://blogger.googleusercontent.com/img/a/AVvXsEjutKnbWgRfRq0kwgZm2XLgo5AVsdl9ECaCgmu8sWojEJ6dkyLL2avby1hVyDfNaBK-USnnNoZ2M1rDNxvfFLJsz93u_EIXAVgCBPREuiQ9uXtYHoWKj0ufGdRYxQ6q9kWUoPSwsjzSqyR-YH-ONAntBJwjdmt_YWwWHQoGhazZYUAsxlgh0X9NzM0Wng" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Depth, estimated normals, estimated edge discontinuties.</td></tr></tbody></table></div><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="font-family: arial;">As I'm working with simple shaders, I employ a simple trick. Each vertex of each quad in my mesh has two UVs, one corresponding to the vertex location - which would sample across texels in the heightmap, and one corresponding to the center of the quad, which would sample a single texel in the heightmap. </span><div><span style="font-family: arial;">In the vertex shader, if a vertex is hitting an "edge" texel when sampling the first UV set, it checks the quad center UV sample as well. If this is still on an edge texel, then the whole quad is part of an edge, and I send the vertex to NaN to kill the triangles. 
Otherwise, I just use the height from the second sample.</span><p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgxMk6MwffLDFSTTyKH45gnFCHnbab-EiJQ-CnVPJ6LpeabnQlkaWRyjRM0mWphjMthrG1BG_RFwMoIHuM2kpyJLe_m-5DLpbItU4vO5IAHKYru1ljBXYaGpjnOKjljkQMtUQwMbBHApNEAMN8efi-F8ctOtho3YTyiiSrMEEldUbeokX6PH0LWryRT-Q" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="697" data-original-width="1266" height="176" src="https://blogger.googleusercontent.com/img/a/AVvXsEgxMk6MwffLDFSTTyKH45gnFCHnbab-EiJQ-CnVPJ6LpeabnQlkaWRyjRM0mWphjMthrG1BG_RFwMoIHuM2kpyJLe_m-5DLpbItU4vO5IAHKYru1ljBXYaGpjnOKjljkQMtUQwMbBHApNEAMN8efi-F8ctOtho3YTyiiSrMEEldUbeokX6PH0LWryRT-Q" width="320" /></a></div><p></p><p style="text-align: justify;"><span style="font-family: arial;">In practice this is overly conservative as it generates large holes, we could instead push the "edge" quads to the farthest depth in the tile, which would hold for many viewpoints, or do something much more sophisticated to actually cut the mesh precisely, instead of relying on just quads. The farthest depth idea is also somewhat related to how small holes are filled in Crytek's algorithm if one squints enough...</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjswNaqWStwQRCnxi8hMCCE2beeqAe_Pfsc1PSJreF2tsW_ep6xvEIlIiqQbyeF_3AZQw1Xt1wJ1Bve21LDtMYNOFcNfbhtxwy-8cOJ7Ksle_RtAJv85gxgKBqtBZ3tw_AbfeDsvi0KYRADrsfB_GMh73YVdJgxWHBzuky7PkT4mGOL0oa8426spkCX7w" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="479" data-original-width="887" height="173" src="https://blogger.googleusercontent.com/img/a/AVvXsEjswNaqWStwQRCnxi8hMCCE2beeqAe_Pfsc1PSJreF2tsW_ep6xvEIlIiqQbyeF_3AZQw1Xt1wJ1Bve21LDtMYNOFcNfbhtxwy-8cOJ7Ksle_RtAJv85gxgKBqtBZ3tw_AbfeDsvi0KYRADrsfB_GMh73YVdJgxWHBzuky7PkT4mGOL0oa8426spkCX7w" width="320" /></a></span></div><p></p><p style="text-align: justify;"><span style="font-family: arial;">What seems interesting, anyhow, is that even with this rudimentary system we can find good, large occluders - and the storage space needed is minimal, we could easily hold hundreds of these small heightfields in memory...</span></p><p style="text-align: justify;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEipVQ511n3RRqvcM5Zrnck7Q492975VwHiPGEzDGOXOYK-joo2WiAUIRzI0G2ARrkgPycb6MDA5YR8SLat3BWlWa7hAdzHFe40cIchk12u9Q3uz0TWFYMv3o0WCu9WN-XCfeLj237IYHlPibQ3aksk5lH4pB2u1Qz39gr0MMVmYZV8WSTaYqVz87ZXL5g" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="475" data-original-width="884" height="172" src="https://blogger.googleusercontent.com/img/a/AVvXsEipVQ511n3RRqvcM5Zrnck7Q492975VwHiPGEzDGOXOYK-joo2WiAUIRzI0G2ARrkgPycb6MDA5YR8SLat3BWlWa7hAdzHFe40cIchk12u9Q3uz0TWFYMv3o0WCu9WN-XCfeLj237IYHlPibQ3aksk5lH4pB2u1Qz39gr0MMVmYZV8WSTaYqVz87ZXL5g" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Combining multiple (three) viewpoints</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">So right now what I think would be possible is:</span></p><p 
style="text-align: justify;"></p><ul><li><span style="font-family: arial;">Keep the last depth and reproject plus close small holes from that, ala Crytek.</span></li><li><span style="font-family: arial;">Then try to fill the remaining holes by using data from other viewpoints. </span></li><li><span style="font-family: arial;">For each view we can have a bounding hierarchy by just creating min-max depth mips (a pyramid), so we can test the volumes against the current reprojection buffer. And we need only to "stencil" test, to see how much of a hole we could cover and with what point density.</span></li><li><span style="font-family: arial;">Rinse and repeat until happy...</span></li><li><span style="font-family: arial;">Test visibility the usual way (mip pyramid, software raster of bounding volumes...)</span></li><li><span style="font-family: arial;">Lastly, if the current viewpoint was novel enough (position and look-at direction) compared to the ones already in the database, consider adding its downsampled depth to the persistent database.</span></li></ul><p></p><p style="text-align: justify;"><span style="font-family: arial;">As all viewpoints are approximate, it's important not to try to merge them with a conventional depthbuffer approach, but to prioritize first the "best" viewpoint (the previous frame's one), and then use the other stored views only to fill holes, prioritizing views closer to the current camera.</span></p><p style="text-align: justify;"><span style="font-family: arial;">If objects move (that we did not exclude from occluder generation), we can intersect their bounding box with the various camera frustums, and either completely evict these points of view from the database, or go down the bounding hierarchy / min-max pyramid and invalidate only certain texels - so dynamic geometry could also be handled.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The idea of generating actual geometry from depth probably also has some merit, especially for regions with simple occlusion like buildings and so on. The naive quad mesh I'm using for visualization could be simplified after displacement to reduce the number of triangles, and the cuts along the edges could be done precisely, instead of on the tiles. </span></p><p style="text-align: justify;"><span style="font-family: arial;">But it doesn't seem worth the time mostly because we would still have very partial occluders with big "holes" along the cuts, and merging real geometry from multiple points of view seems complex - at that point, we'd rather work in world-space, which brings to...</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Second prototype: Voxels</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Why all the complications about viewpoints and databases, if in the end, we are working with point sets? Could we store these directly in world-space instead? Maybe in a voxel grid?</span></p><p style="text-align: justify;"><span style="font-family: arial;">Of course, we can! In fact, we could even just voxelize the scene in a separate process, incrementally, generating point clouds, signed distance fields, implicit surfaces, and so on... That's all interesting, but for this particular case, as we're working incrementally anyways, using the depth buffer is a particularly good idea. 
</span></p><p style="text-align: justify;"><span style="font-family: arial;">Going from depth to voxels is trivial, and we are not even limited to using the main camera depth, we could generate an ad-hoc projection from any view, using a subset of the scene objects, and just keep accumulating points / marking voxels.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Incidentally, working on this made me notice an equivalence that I didn't think of before. Storing a binary voxelization is the same as storing a point cloud if we assume (reasonably) that the point coordinates are integers. A point at a given integer x,y,z is equivalent to marking the voxel at x,y,z as occupied, but more interestingly, when you store points you probably want to compress them, and the obvious way to compress would be to cluster them in grid cells, and store grid-local coordinates at a reduced precision. This is exactly equivalent then again to storing binary voxels in a sparse representation. </span></p><p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiV5sNURr5mWJiuqYnQfxmDRfyylcpK1Xv6je2_QtvXd8GMV8DvToLGV2zkamdsDZbpTeQSHfdw0GA0XUb3O1-0pPr4_Yg2fnChms7ZWgmS6OxD7EBmpiucDO5qs5w4v6Kj7WyIuZmoZ64TKtK23CIQsbPjWPALi7NNvtMzGY4j5TcHAq5PUanC5mVujw/s598/Screenshot%202023-03-15%20105034.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="313" data-original-width="598" height="167" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiV5sNURr5mWJiuqYnQfxmDRfyylcpK1Xv6je2_QtvXd8GMV8DvToLGV2zkamdsDZbpTeQSHfdw0GA0XUb3O1-0pPr4_Yg2fnChms7ZWgmS6OxD7EBmpiucDO5qs5w4v6Kj7WyIuZmoZ64TKtK23CIQsbPjWPALi7NNvtMzGY4j5TcHAq5PUanC5mVujw/s320/Screenshot%202023-03-15%20105034.png" width="320" /></a></div><br /><p></p><p style="text-align: justify;"><span style="font-family: arial;">It is obvious, but it was important to notice for me because for a while I was thinking of how to store things "smartly", maybe allow for a fixed number of points/surfels/planes per grid and find ways to merge when adding new ones, all possible and fun to think about, but binary is so much easier. </span></p><p style="text-align: justify;"><span style="font-family: arial;">In my compute shader, I am a rebel bit-pack without even InterlockedOR because I always wanted to write code with data races that still converge to the correct result! 
</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWTwOtdZnDDFPI_rTrtZ3ChC_NkrBPZpDrrjFPYJP7bMFiC7y0jlTk-66oTq-TdeQrca5a9fpB6wOuC5G3GJUatjqvqtWzCNjHpeHagf2rcb8_3WJ3KJRAQEmGNIfDbFCOi6UFbbfWuHkXA57o00etnDRYHZBSJy1owPc_7fM5jeoYjXANBX42k8Z70g/s1755/Screenshot%202023-03-15%20110128.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="762" data-original-width="1755" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWTwOtdZnDDFPI_rTrtZ3ChC_NkrBPZpDrrjFPYJP7bMFiC7y0jlTk-66oTq-TdeQrca5a9fpB6wOuC5G3GJUatjqvqtWzCNjHpeHagf2rcb8_3WJ3KJRAQEmGNIfDbFCOi6UFbbfWuHkXA57o00etnDRYHZBSJy1owPc_7fM5jeoYjXANBX42k8Z70g/w640-h278/Screenshot%202023-03-15%20110128.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">As the camera moves (left) the scene voxelization is updated (left)</td></tr></tbody></table><span style="font-family: arial;"><br />If needed, one could then take the binary voxel data and compute from it a coarser representation that encodes planes or SDFs, etc! This made me happy enough that even if it would be cute to figure out other representations, they all went into a shelve-mode. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;">I spent some time thinking about how to efficiently write a sparse binary voxel, or how to render from it in parallel (load balancing the parallel work), how to render front-to-back if needed, all interesting problems but in practice, not worth yet solving. Shelve!</span></p><p style="text-align: justify;"></p><p style="text-align: justify;"><span style="font-family: arial;">The main problem with a world-space representation is that the error in screenspace is not bounded, obviously. If we get near the points, we see through them, and they will be arbitrarily spaced apart. We can easily use fewer points farther from the camera, but we have a fixed maximum density.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The solution? Will need another blog post, because this is getting long... and here is where I'm at right now anyways!</span></p><p style="text-align: justify;"><span style="font-family: arial;">I see a few options I want to spend more time on:</span></p><p style="text-align: justify;"><span style="font-family: arial; text-align: left;">1) <u>Draw points as "quads" or ellipsoids etc</u>. 
This can be done efficiently in parallel for arbitrary sizes, it's similar to tile-based GPU particle rendering.</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi11r5IK5dyUHYi0_j9Epdh0quvoOgxWz1Jjw-U7AjOgu_qed9LzFggmwE-uf67arax-HcqRJQz5xggCKLUHFCwpozt0FAtZAr0P5xnrsICUUgFmKm4iEo4-tEbjf_UoF9fD41c02YCKbBDVUJ5Qk1zi54Bu7T_xb8VHksL4jrEReA5Brs4ew94YwGyBA/s830/Screenshot%202023-03-15%20112105.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="564" data-original-width="830" height="217" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi11r5IK5dyUHYi0_j9Epdh0quvoOgxWz1Jjw-U7AjOgu_qed9LzFggmwE-uf67arax-HcqRJQz5xggCKLUHFCwpozt0FAtZAr0P5xnrsICUUgFmKm4iEo4-tEbjf_UoF9fD41c02YCKbBDVUJ5Qk1zi54Bu7T_xb8VHksL4jrEReA5Brs4ew94YwGyBA/s320/Screenshot%202023-03-15%20112105.png" width="320" /></a></div><div><br /></div><div><span style="font-family: arial;">We could even be clever, under the assumption that splats do not overlap much: we can send them to different tiles based on their size - forming a mipmap hierarchy of buckets. </span><span style="font-family: arial;">In that case, we know that for each bucket there is only a small fixed number of splats that could land. </span><span style="font-family: arial;">Then, walking per each pixel the hierarchy from the biggest splats/fewer tiles to the smallest, you even get approximate depth sorting!</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">2) We could do something more elaborate <u>to reconstruct a surface in screen-space</u> / fill holes.</span></div><div><br /></div><div><span style="font-family: arial;">Imperfect Shadow Maps used a push-pull pyramid to fill arbitrary-sized holes for example. In our case though we would need to be more careful to only join points that are supposed to be on the same surface, and not holes that were actually present in the scene... </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">A related problem would be on how to perform visibility on the point cloud itself, as clearly points father aways will poke in between closest points. That could be addressed with some kind of depth layers or a similar heuristic, allowing a near point to "occlude" a large number of background points, farther than a few voxels from it... 
</span></div><div><span style="font-family: arial;">These ideas have some research in the point cloud literature, but none is tailored to occlusion, which has different requirements.</span></div><div><span style="font-family: arial;"><br /></span></div><div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJWRtUfuJuqzHQ96S-Oa9SCScJrUj5XF-aN2MT6y_Zw5K24_2FFT3iovBUC1cLzbswuK_OAdDDv77Vu6qD7doWNH-Tg8ogzdVK23Qe4oxW3Ha0kncxhdoHZeqZHClMHWEKXtIdy3PoN3SrPh-D2M9xEsVL1lxSFSWRXstJE2oxyHmFkdfkDqA55TvacQ/s976/Screenshot%202023-03-15%20113145.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="326" data-original-width="976" height="107" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJWRtUfuJuqzHQ96S-Oa9SCScJrUj5XF-aN2MT6y_Zw5K24_2FFT3iovBUC1cLzbswuK_OAdDDv77Vu6qD7doWNH-Tg8ogzdVK23Qe4oxW3Ha0kncxhdoHZeqZHClMHWEKXtIdy3PoN3SrPh-D2M9xEsVL1lxSFSWRXstJE2oxyHmFkdfkDqA55TvacQ/s320/Screenshot%202023-03-15%20113145.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">From <a href="https://www.semanticscholar.org/paper/Real-time-Rendering-of-Massive-Unstructured-Raw-Pintus-Gobbetti/4fbea543a9c217202a60cf2aa660cf0df8dc14c7">[PDF] Real-time Rendering of Massive Unstructured Raw Point Clouds using Screen-space Operators | Semantic Scholar</a></td></tr></tbody></table><br /><span style="font-family: arial;"><br /></span></div><div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMoUV7l2aIW_Ixoz3-rVf97s4B07XiJ4xxPyyizbifbPo566k6hJg7jn54CjeOC26oAvIQZk_cn8xEvYo4Yy6K_IYTw1_xFMEX2K43zLs4qLCi5152cK_mmSjYnp_2KS9daAOzcL99DFPGC9ruOsEPfPAURIKZ6nX3rGLSqxDnCZrdtkM2LFlXM9oIPA/s1244/Screenshot%202023-03-15%20113254.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="337" data-original-width="1244" height="87" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMoUV7l2aIW_Ixoz3-rVf97s4B07XiJ4xxPyyizbifbPo566k6hJg7jn54CjeOC26oAvIQZk_cn8xEvYo4Yy6K_IYTw1_xFMEX2K43zLs4qLCi5152cK_mmSjYnp_2KS9daAOzcL99DFPGC9ruOsEPfPAURIKZ6nX3rGLSqxDnCZrdtkM2LFlXM9oIPA/s320/Screenshot%202023-03-15%20113254.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">From <a href="https://hal.science/hal-01959578/document">Raw point cloud deferred shading through screen space pyramidal operators (hal.science)</a> - see also <a href="https://www.lcg.ufrj.br/marroquim/publications/pdfs/marroquim-pbg2007.pdf">marroquim-pbg2007.pdf (ufrj.br)</a></td></tr></tbody></table><br /></div><p style="text-align: justify;"><span style="font-family: arial;">3) We could <u>reconstruct a surface</u> for near voxels, either by producing an actual mesh (which we could cache, and optimize) or by raymarching (gives the advantage of being able to stop at first intersection). </span></p><p style="text-align: justify;"><span style="font-family: arial;">We'd still points at a distance, when we know they would be dense enough for simple dilation filters to work, and switch to the more expensive representation only for voxels that are too close to the camera to be treated as points. 
</span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgn3OzDqOIc2uuLYi-UuWj8urpEsp9G_tOvTc_CO7DiwPw_PjO-DDDgaksLTITYuSZd_l-eAY9T1pIDv-GOlkfXgqqfCEUhGvk-Gg9psrljJMo5XFgilizcrofPJlZCicNMzHTX5jW3z8dT5LGWAdTLtM1ce5maMjpJbayFWSsFEOAuQgA0QR3RIwQmA/s1150/Screenshot%202023-03-09%20172255.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="565" data-original-width="1150" height="314" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgn3OzDqOIc2uuLYi-UuWj8urpEsp9G_tOvTc_CO7DiwPw_PjO-DDDgaksLTITYuSZd_l-eAY9T1pIDv-GOlkfXgqqfCEUhGvk-Gg9psrljJMo5XFgilizcrofPJlZCicNMzHTX5jW3z8dT5LGWAdTLtM1ce5maMjpJbayFWSsFEOAuQgA0QR3RIwQmA/w640-h314/Screenshot%202023-03-09%20172255.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Inspired by MagicaVoxel's binary MC (see <a href="https://www.shadertoy.com/view/fttyzM">here a shadertoy</a> version) - made a hack that could be called "binary sufrace nets". Note that this is at half the resolution of the previous voxel/point clouds images, and still holds up decently.</td></tr></tbody></table><span style="font-family: arial; text-align: justify;"><br /></span></div><div><span style="font-family: arial; text-align: justify;">4) We could <u>hybridize</u> with the previous idea, and use the depth from the last frame as an initial reprojection, while then fetching from the point cloud/voxel representation for hole-filling (we'd still need some way of dealing with variable point density, but it might matter less if it's only for a few holes).</span></div><div><span style="font-family: arial; text-align: justify;"><br /></span></div><div style="text-align: justify;"><span style="font-family: arial;">I think this is <u>the most promising</u> direction, it makes caching trivial, while side-stepping the biggest issues with world-space occluders, which is the fact that even a tiny error (say, 1 centimeter) if seen up close enough (in front of your virtual nose) would cause huge mis-occlusions. </span></div><div style="text-align: justify;"><span style="font-family: arial;"><br /></span></div><div style="text-align: justify;"><span style="font-family: arial;">If we used the previous screenspace Z as an initial occlusion buffer, and then augment that with the world-space point cloud, we could render the latter with a near plane that is pushed far enough for the approximation error not to be problematic, while still filling the holes that the reprojection would have. Yes, the holes will still miss some occluders, as now we're not using the cache until a given distance, and worst case we could peek behind a wall causing lots of objects to be rendered... but realtime rendering is the art of finding the best compromises...</span></div><br />DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-46374147561842957042023-03-04T14:50:00.007-08:002023-03-06T11:37:40.221-08:00 Hidden in plain sight: The mundanity of the Metaverse<p style="text-align: justify;"><span style="font-family: arial;"><br />Don’t you hate it when words get stolen? 
Now, we won’t ever have a “web 3”, that version number has been irredeemably coopted by scammers or worse, tech-bros that live a delusion of changing the world with their code, blindly following their ideology without ever trying to connect to the humanity code’s meant to serve.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Well, this is what happened to “the metaverse”. It didn’t help that it never had a solid definition, to begin with (I tried to <a href="http://c0de517e.blogspot.com/2022/02/wtf-is-metaverse.html">craft one here</a>), and then the hype train came and EVERYTHING needed to be marketed as either a metaverse or for the metaverse.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><br /></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjB9MK3bzojyF1wC5LqVM78pDJtY92WpqcaZc-AF9E5tB5DMZaEiUeS5x6dySlxOEkD_mXZltR6UGMSOBEjDcJ2TPb2Ib16cDngjpxohoabnCPb6gJofia67NNHpI0KLmZjQDwEroqVzhIQGqneOGaUw2XBzUliD6o1-L7HERkwco7gth0CRdLz24oAlg" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="590" data-original-width="1178" height="160" src="https://blogger.googleusercontent.com/img/a/AVvXsEjB9MK3bzojyF1wC5LqVM78pDJtY92WpqcaZc-AF9E5tB5DMZaEiUeS5x6dySlxOEkD_mXZltR6UGMSOBEjDcJ2TPb2Ib16cDngjpxohoabnCPb6gJofia67NNHpI0KLmZjQDwEroqVzhIQGqneOGaUw2XBzUliD6o1-L7HERkwco7gth0CRdLz24oAlg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The straw that broke this camel's back...</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">The final nail in the word’s coffin fell down when notoriously, a big social networking company, looking at the data on its userbase and monetization trending down, decided it was the time for a BOLD move, stole the word, and decided to rush all-in making huge investments in all sort of random things that looked metaverse-y, just throwing in the trash the innovator’s dilemma and its solution.</span></p><p style="text-align: justify;"><span style="font-family: arial;">But if I told you that, hidden in plain sight, this idea of the metaverse is actually rather obvious, even mundane, and all you need to do is to sit down and observe what has been going on… with people.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Trends in the gaming industry.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">I’m not the best person to wade through the philosophy and psychology of entertainment - how it is fundamentally social, interactive, and important.</span></p><p style="text-align: justify;"><span style="font-family: arial;">And neither I am, even in my field, a historian - so I won’t be presenting an accurate accounting of what happened in the past couple of decades.</span></p><p style="text-align: justify;"><span style="font-family: arial;">I hope the following will be mundane enough that it can be shown even through an imperfect lens, and for familiarity’s sake, I’ll use my own career as one.</span></p><p style="text-align: justify;"><span style="font-family: arial;">I have to warn you: this is going to be boring. 
All that I’m going to say, is obvious… it’s just that for some reason, I don’t see often all the dots being connected…</span></p><p style="text-align: justify;"><span style="font-family: arial;">Let’s go.</span></p><p style="text-align: justify;"><span style="font-family: arial;">I started working in the videogame industry in the early 2000s. The very tail end of the ps2 era (I never touched that console’s code - the closets I came was to modify some og xbox stuff we were using as we repurposed a rack of old consoles to help certain data bakes), right at the beginning of the 360 one.</span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="box art" class="lazy img-fluid boxart boxart-big loaded" data-sizes="20vw" data-src="/wp-content/uploads/2018/04/Evolution-GT.jpg" data-srcset="/wp-content/uploads/2018/04/Evolution-GT.jpg 460w, /wp-content/uploads/2018/04/Evolution-GT.jpg 920w" data-was-processed="true" sizes="20vw" src="https://milestone.it/wp-content/uploads/2018/04/Evolution-GT.jpg" srcset="https://milestone.it/wp-content/uploads/2018/04/Evolution-GT.jpg 460w, https://milestone.it/wp-content/uploads/2018/04/Evolution-GT.jpg 920w" style="margin-left: auto; margin-right: auto; text-align: start;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">My first game (uncredited)</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">What were we doing? Boxed titles. Local, self-contained experiences. Yes, you could play split screen if you happened to have a friend nearby - and that’s incredibly fun, we are social animals after all… </span></p><p style="text-align: justify;"><span style="font-family: arial;">But all in all, you shipped a title, you pressed discs, people bought discs, inserted them in their console, played on the couch, rinse and repeat.</span></p><p style="text-align: justify;"><span style="font-family: arial;">I did a couple of these, then moved from Italy to Canada, to work for EA, a much bigger company, we’re around the middle of the 360/ps3 era now.</span></p><p style="text-align: justify;"><span style="font-family: arial;">What were we doing? Yeah, you guessed it, multiplayer titles. Single-player was still important, local multiplayer was still important, and we were still pressing discs… but we started to move towards a more connected idea of gaming. </span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Is Fight Night Champion Good? 
Revisiting the Boxing Game 10 Years Later" class="n3VNCb pT0Scc KAlRDb" data-noaft="1" height="100" jsaction="load:XAeZkd;" jsname="HiaYvf" src="https://static1.colliderimages.com/wordpress/wp-content/uploads/2021/06/fight-night-champion-1.jpg" style="-webkit-user-drag: auto; -webkit-user-select: text; height: 317px; margin: 0px auto; text-align: start; user-select: text; width: 634px;" width="200" /></td></tr><tr><td class="tr-caption" style="text-align: center;">You know I'm still <a href="http://c0de517e.blogspot.com/2011/09/fight-night-champion-gdc.html">proud of the work</a> on this one...</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">We would do DLCs, and support the game longer post-shipping; Communities started to grow bigger as you could connect around a game.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The game you got on disc was not that relevant anymore, was just a starting point, necessarily. There is no way to game-design something that will be played, concurrently, by millions of players. They will break your game, find balancing issues, and so on, so really, the game code was made to be infinitely tweakable, in “real-time” by people monitoring the community and making sure it kept being fun and challenging…</span></p><p style="text-align: justify;"><span style="font-family: arial;">Gaming has always been a community, with forums, magazines, TV shows, and such, but you start seeing all of that grow, people staying with a game longer, sequels to be more important, franchises over single titles…</span></p><p style="text-align: justify;"><span style="font-family: arial;">What’s next? </span></p><p style="text-align: justify;"><span style="font-family: arial;">For me, Ps4/Xbox one, Activision, Call of Duty… Where are we going? E-sports, twitch, youtube. A longer and longer tail of content. </span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Modern Warfare 3 live action trailer brings Hollywood to Call of Duty » EFTM" class="n3VNCb pT0Scc KAlRDb" data-noaft="1" jsaction="load:XAeZkd;" jsname="HiaYvf" src="http://www.eftm.com/wp-content/uploads/2011/11/mw3-trailer-640x355.jpg" style="height: 351.672px; margin: 0px auto; text-align: start; width: 634px;" /></td></tr><tr><td class="tr-caption" style="text-align: center;"> I do miss the live-action, star-studded fun trailers COD used to make...</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">We go beyond tweaking the game post-launch, now, really the success of a game is measured in how well you keep providing interesting content, and interesting experiences with that framework you created.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Games as a service, we see the drop in physical game sales, the move to digital distribution - and with it, the boom of indie game making, of the idea that anyone can create and share.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Even big franchises, with their tight control over their IP, are nothing without the community of creators around them. 
Playstation “share” et al.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Call of duty is not simply the game that ships in a box, it’s a culture, it’s a scene - a persistent entity even way before it was a persistent gaming universe (only recently happening with WarZone).</span></p><p style="text-align: justify;"><span style="font-family: arial;">And then of course, I moved to Roblox, where I am now - and I guess I should have said somewhere, this is all personal - it’s my view of the industry, not connected with my job there and the company’s goals (Dave started from an educational tool, and from there crafted a vision that has always been quite unique, arguably the reason why now it ended up being ahead, clearer etc...). </span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="NoisyButters - YouTube" class="n3VNCb pT0Scc KAlRDb" data-noaft="1" jsaction="load:XAeZkd;" jsname="HiaYvf" src="https://i.ytimg.com/vi/q5NVp-zBBYQ/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLAfBhX8a01jFCDhsQI19UhALPOH1A" style="-webkit-user-drag: auto; -webkit-user-select: text; height: 270px; margin: 9.45px auto; text-align: start; user-select: text; width: 480px;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">I like the positivity of <a href="https://www.youtube.com/channel/UCd6dpR7BBeX4w4Tpa_5cKcQ">NoisyButters</a></td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">Hopefully, you can see that my point here is more general than what this or that company wants to do...</span></p><p style="text-align: justify;"><span style="font-family: arial;">But again, I moved to Roblox, personally because I liked the idea to be closer to the creative side of the equation, but in general, where are we now? </span></p><p style="text-align: justify;"><span style="font-family: arial;">What’s the new wave of gaming? Fortnite? Minecraft? Among us? Tarkov? Diablo 4? Whatever, you see the trends:</span></p><p style="text-align: justify;"></p><ul><li><span style="font-family: arial;">Games are social, and encourage socialization, they are communities. Effectively, they are social networks, just as clubhouse, instagram, tiktok…</span></li><li><span style="font-family: arial;">There are user-created universes “around” the games, even when the game does not allow at all UGC.</span></li><li><span style="font-family: arial;">Games live or die based on the supply of content "flowing through" them. 
They are vehicles for content delivery.</span></li><li><span style="font-family: arial;">The in-game world and real world have continuous crossovers, brands, concerts, events, celebrations…</span></li></ul><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhE3aVDSuU7LLFUI9xD25Qf_ssmK_VlJ4ZGFHKc1Xu_2HscfkjMBLwMkpDiG1RTs3LYWI73lCao-pH-LJUgnapxWOW66FM21SVXTg4bWQZYVVYXWVta8uiqvAnjU1nKNQCBmzCCK4QBHPo2ULJUCJrL-kk8ZO26jsNOMFTdSLRj2IdS9t3H3ETIF_HMGQ" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="845" data-original-width="1220" height="222" src="https://blogger.googleusercontent.com/img/a/AVvXsEhE3aVDSuU7LLFUI9xD25Qf_ssmK_VlJ4ZGFHKc1Xu_2HscfkjMBLwMkpDiG1RTs3LYWI73lCao-pH-LJUgnapxWOW66FM21SVXTg4bWQZYVVYXWVta8uiqvAnjU1nKNQCBmzCCK4QBHPo2ULJUCJrL-kk8ZO26jsNOMFTdSLRj2IdS9t3H3ETIF_HMGQ" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Why do I play D3? For the transmog fashion of course!<br />And if you haven't played Fortnite / experienced its immense catalog of skins, you're missing out.</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;"><b>Conclusions.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Yes, all of this has been true in some ways since forever, in a more underground fashion. </span></p><p style="text-align: justify;"><span style="font-family: arial;">MUDs and modding, Ultima Online and Warcraft, ARGs, and LARPing, I know - nothing's new under the sun. But this does not invalidate the idea, it reinforces it, everything that is mainstream today has been underground before...</span></p><p style="text-align: justify;"><span style="font-family: arial;">So, are we surprised that “the metaverse” matters? The idea of crafting the creative space, making a platform for creativity, having the social aspect built-in, to go beyond owning single IPs? To make the youtube of gaming, to merge creation, distribution, and communication? To allow people to create, instead of trying to cope with content demands by having everything in house, in a continuous death march that anyways will never match what communities can imagine?</span></p><p style="text-align: justify;"><span style="font-family: arial;">I have to admit, a lot of ideas I see in this space look incredibly dumb. The equation that the metaverse is AR/VR/XR, that is the holodeck or ready player one, whatever… and look, one day it might even be, in a time horizon that I really don’t care talking about.</span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Innovation dies where monopolies thrive: why Meta is failing at metaverse | Cybernews" class="n3VNCb pT0Scc KAlRDb" data-noaft="1" jsaction="load:XAeZkd;" jsname="HiaYvf" src="https://media.cybernews.com/images/featured-big/2022/12/Mark-Zuckerberg-Facebook.jpg" style="height: 348.7px; margin: 0px auto; text-align: start; width: 634px;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">:/</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">But today? 
Today is mundane, it’s an obvious space that does not need to be created, it’s already here, in products and trends, and will only evolve towards more integrated platforms and better products and so on - but it is anything but surprising. </span></p><p style="text-align: justify;"><span style="font-family: arial;">It’s not science fiction, it’s basic humanity wanting to connect and create.</span></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-83773972653120005212023-02-20T16:29:00.003-08:002024-01-17T11:04:41.285-08:00How to render it. Ten ideas to solve Computer Graphics problems.<p style="text-align: justify;"><span style="font-family: arial;"><b>Pre-Ramble.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">A decade ago or so, yours truly was a greener but enthusiastic computer engineer, working in production to make videogames look prettier. At a point, I had, in my naivety, an idea for a book about the field, and went* to a great mentor, with it.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>* messaged over I think MSN messenger. Could have been Skype, could have been ICQ, but I think it was MSN...</i></span></p><p style="text-align: justify;"><span style="font-family: arial;">He warned me about the amount of toiling required to write a book, and the meager rewards, so that, coupled with my inherent laziness, was the end of it.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The mentor was a guy called Christer Ericson, who I had the fortune of working for later in life, and among many achievements, is the author of Real-time Collision Detection, still to this date, one of the best technical books I’ve read on any subject.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The idea was to make a book not about specific solutions and technologies, but about conceptual tools that seemed to me at the time to be recurringly useful in my field (game engine development).</span></p><p style="text-align: justify;"><span style="font-family: arial;">He was right then, and I am definitely no less lazy now, so, you won’t get a book, but I thought, having accumulated a bit more experience, it might be interesting to meditate on what I’ve found in my career to be useful when it comes to innovation in real-time rendering.</span></p><p style="text-align: justify;"><span style="font-family: arial;">As we'll be talking about tools for innovation, the following is written assuming the reader has enough familiarity with the field - as such, it's perhaps a bit niche. 
<u>I'd love if others were to write similar posts about other industries though</u> - we have plenty of tools to generate ideas in creative fields, but I've seen fewer around (computer) science.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>The metatechniques.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">In no specific order, I’ll try to describe ten meta-ideas, tools for thought if you wish, and provide some notable examples of their application.</span></p><p style="text-align: justify;"></p><ol><li><span style="font-family: arial;">Use the right space.</span></li><li><span style="font-family: arial;">Data representation and its properties.</span></li><ul><li><span style="font-family: arial;">Consider the three main phases of computation.</span></li><li><span style="font-family: arial;">Consider the dual problem.</span></li></ul><li><span style="font-family: arial;">Compute over time.</span></li><li><span style="font-family: arial;">Think about the limitations of available data.</span></li><ol><li><span style="font-family: arial;">Machine learning as an upper limit.</span></li></ol><li><span style="font-family: arial;">The hierarchy of ground truths.</span></li><li><span style="font-family: arial;">Use computers to help along the way.</span></li><li><span style="font-family: arial;">Humans over math.</span></li><li><span style="font-family: arial;">Find good priors.</span></li><li><span style="font-family: arial;">Delve deep.</span></li><li><span style="font-family: arial;">Shortcut via proxies.</span></li></ol><p></p><p style="text-align: justify;"><span style="font-family: arial;">A good way to use these when solving a problem is to map out a design space, try to sketch solutions using a combination of different choices in each axis, and really try to imagine if it would work (i.e. 
on pen and paper, not going deep into implementation).</span></p><p style="text-align: justify;"><span style="font-family: arial;">Then, from this catalog of possibilities, select a few that are worth refining with some quick experiments, and so on and so forth, keep narrowing down while going deeper.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjRo5n2QGAAIOQ_8jxxeGuiakjfvp5wc0ErIAXB6qn3-R8mPypuNZLgqTalsaT54DEj65C0mbtOgyFS9FVEi8mnt8xJuSo8wwjuhXpjjP2ClbT3097mtA587HMe1vaXscTkOsDg-VQ7i7vq_hi6tst6tVdY12xzmuIlY062ZAF2-UEql5yiTNcgv_jx1g" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="1200" data-original-width="1600" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEjRo5n2QGAAIOQ_8jxxeGuiakjfvp5wc0ErIAXB6qn3-R8mPypuNZLgqTalsaT54DEj65C0mbtOgyFS9FVEi8mnt8xJuSo8wwjuhXpjjP2ClbT3097mtA587HMe1vaXscTkOsDg-VQ7i7vq_hi6tst6tVdY12xzmuIlY062ZAF2-UEql5yiTNcgv_jx1g" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Related post: <a href="https://c0de517e.blogspot.com/2015/03/design-optimization-landscape.html">Design optimization landscape</a></td></tr></tbody></table><span style="font-family: arial;"><br /></span><b style="font-family: arial;">1) Use the right space.</b><p></p><p style="text-align: justify;"><span style="font-family: arial;">Computer graphics problems can literally be solved from different perspectives, and each offers, typically, different tradeoffs.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Should I work in screen-space? Then I might have an easier time decoupling from scene complexity, and I will most likely work only on what’s visible, but that’s also the main downside (e.g. having to handle disocclusions and not being able to know what’s not in view). Should I work in world-space? In object-space? In “texture”-space, i.e. 
over a parametrization of the surfaces?</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Examples:</i></span></p><p style="text-align: justify;"></p><ul><li><span style="font-family: arial;"><i><a href="https://web.archive.org/web/20090219082501/http://delivery.acm.org/10.1145/1290000/1281671/p97-mittring.pdf?key1=1281671&key2=9942678811&coll=ACM&dl=ACM&CFID=15151515&CFTOKEN=6184618">Obviously, SSAO</a>, which really opened the floodgates of screen-space techniques</i></span></li><li><span style="font-family: arial;"><i><a href="https://www.researchgate.net/publication/314796074_Realistic_human_face_rendering_for_The_Matrix_Reloaded">Realistic human face rendering for "The Matrix Reloaded"</a></i></span></li></ul><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>2) Data representation and its properties.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">This is a fundamental principle of computer science; different data structures have fundamentally different properties in terms of which operations they allow to be performed efficiently.</span></p><p style="text-align: justify;"><span style="font-family: arial;">And even if that’s such an obvious point, do you think systematically about it when exploring a problem in real-time rendering?</span></p><p style="text-align: justify;"><span style="font-family: arial;">List all the options and the relative properties. We might be working on signals on a hemisphere, what do we use? Spherical Harmonics? Spherical Gaussians? LTCs? A hemicube? Or we could map from the hemisphere to a circle, and from a circle to a square, to derive a two-dimensional parametrization, and so on.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Voxels or froxels? Vertices or textures? Meshes or point clouds? For any given problem, you can list probably at least a handful of fundamentally different data structures worth investigating.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>2B) Consider the three main phases of computation.</b></span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: arial;">Typically,</span></span><span style="font-family: arial;"> real-time rendering computation is divided into three: scene encoding, solver, and real-time retrieval. Ideally, we use the same data structure for all three, but it might be perfectly fine to consider different encodings for each.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>For example, let’s consider global illumination. We could voxelize the scene, then scatter light by walking the voxel data structure, say, employing voxel cone tracing, and finally utilize the data during rendering by directly sampling the voxels. We can even do everything in the same space, using world-space would be the most obvious choice, starting from using a compute 3D voxelizer over the scene. That would be fine. </i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>But nobody prohibits us to use different data structures in each step, and the end results might be faster. For example, we might want to take our screen-space depth and lift that to a world-space voxel data structure. We could (just spitballing here, not to mean it’s a good idea) generate probes with a voxel render, to approximate scattering. 
And finally, we could avoid sampling probes in real-time, by say, incrementally generating lightmaps (again, don’t take this as a serious idea).</i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i></i></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgLlR3IRC5Dfw_a9bzvFmKx1-xH7V1Sepb1_ye5rDzE07p033hXWG6KUYn97RiUQlFyxaZ8Tz__XG0NxILyOTrv18HnhYFnzDLcOpSttywlqRSVTXL4cFt2Gu_Sf946a1AHHGDDHiGJEuId62M0qiU0sFgnmhz-uc3fCJdyhElj5sIIvBxR1fs67JHVOA" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="234" data-original-width="480" height="156" src="https://blogger.googleusercontent.com/img/a/AVvXsEgLlR3IRC5Dfw_a9bzvFmKx1-xH7V1Sepb1_ye5rDzE07p033hXWG6KUYn97RiUQlFyxaZ8Tz__XG0NxILyOTrv18HnhYFnzDLcOpSttywlqRSVTXL4cFt2Gu_Sf946a1AHHGDDHiGJEuId62M0qiU0sFgnmhz-uc3fCJdyhElj5sIIvBxR1fs67JHVOA" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://www.researchgate.net/publication/47862111_Imperfect_Shadow_Maps_for_Efficient_Computation_of_Indirect_Illumination">Imperfect Shadow Maps are a neat example of thinking outside the box in terms of spaces and data structure to solve a problem...</a></td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>2C) Consider the dual problem.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">This is a special case of "the right data" and "the right space" - but it is common enough, and easy to overlook, so it gets a special mention.</span></p><p style="text-align: justify;"><span style="font-family: arial;">All problems have "duals", some in a very mathematical sense, others in a looser interpretation of the world. Often time, looking at these duals yields superior solutions, either because the dual is easier/better to solve, or because one can solve both, exploiting the strengths of each system</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>A simple example of rigorous duality is in the classic marching cubes algorithm, compared to the surface nets which operates on the dual grid of MC: surface nets are much easier and higher quality!</i></span></p><p style="text-align: justify;"><i><span style="font-family: arial;">A more interesting, more philosophical dual, is in the relationship between cells and portals for visibility "versus" occluders and bounding volumes. Think about it :)</span></i></p><p style="text-align: justify;"><span style="font-family: arial;"><b>3) Compute over time.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">This is a simple universal strategy to convert computationally hard problems into something amenable to real-time. Just don’t try to solve anything in a single frame, if it can be done over time, it probably should.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Incremental computation is powerful in many different ways. 
It exploits the fact that typically, a small percentage of the total data we have to deal with is in the working set.</span></p><p style="text-align: justify;"><span style="font-family: arial;">This is powerful because it is a universal truth of computing, not strictly a rendering idea (think about memory hierarchies, caches, and the cost of moving data around).</span></p><p style="text-align: justify;"><span style="font-family: arial;">Furthermore, it’s perceptually sound. Motion grabs our attention, and our vision system deals with deltas and gradients. So, we can get by with a less perfect solution if it is “hidden” by a bigger change.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Lastly, it is efficient computationally, because we deal with a very strict frame budget (we want to avoid jitter in the framerate) but an uneven computational load (not all frames take the same time). Incremental computation allows us to “fill” gaps in frames that are faster to compute, while still allowing us to end in time if a frame is more complex, by only adding lag to the given incremental algorithm. Thus, we can always utilize our computational resources fully.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Obviously, <a href="https://de45xmedrsdbp.cloudfront.net/Resources/files/TemporalAA_small-59732822.pdf">TAA</a><b>,</b> but examples here are too numerous to give, it’s probably simpler to note how modern engines look like an abstract mess if one forces all the incremental algorithms to not re-use temporal information. It’s everywhere<b>. </b></i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i><b></b></i></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiG-pzYgIQiM3AcbUxN5QaXgWSJ4EN3Vl_iIdhSYAtRO39rp9O3rNUBkybJ-C2AUiPQNONigFKaRL8PxAU7cNy6fviaKvE2HdSLEvBng38faEuQOjZvHjZ9zVWwRPaUfWsfk30_UFXz5U1IhRRjUQKTrgaWfdwWJfW5kyoVDlednyCmHK-U4kEKs0-G1Q" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="844" data-original-width="1201" height="225" src="https://blogger.googleusercontent.com/img/a/AVvXsEiG-pzYgIQiM3AcbUxN5QaXgWSJ4EN3Vl_iIdhSYAtRO39rp9O3rNUBkybJ-C2AUiPQNONigFKaRL8PxAU7cNy6fviaKvE2HdSLEvBng38faEuQOjZvHjZ9zVWwRPaUfWsfk30_UFXz5U1IhRRjUQKTrgaWfdwWJfW5kyoVDlednyCmHK-U4kEKs0-G1Q" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="http://c0de517e.blogspot.com/2020/12/hallucinations-re-rendering-of.html">Parts of Cyberpunk 2077's specular lighting, without temporal </a></td></tr></tbody></table><span style="font-family: arial;"><i><b><br /></b></i></span><span style="font-family: arial;">It’s worth noting also that here I’m not just thinking of temporal reprojection, but all techniques that cache data over time, that update data over multiple frames, and that effectively result in different aspects of the rendering of a frame to operate at entirely decoupled frequencies.</span><p></p><p style="text-align: justify;"><i><span style="font-family: arial;">Take modern shadowmaps. 
Cascades are linked to the view-space frustum, but we might <a href="https://research.activision.com/publications/archives/sparse-shadow-trees">divide them into tiles</a> and <a href="https://advances.realtimerendering.com/s2012/insomniac/Acton-CSM_Scrolling(Siggraph2012).pdf">cache over frames</a>. Many games then throttle sun movements to happen mostly during camera motion, to hide recomputation artifacts. </span></i><i><span style="font-family: arial;">We might <a href="http://c0de517e.blogspot.com/2011/03/stable-cascaded-shadow-maps-ideas.html">update far cascades at different frequencies</a> than close ones and entirely bail out of updating tiles if we’re over a given frame budget. Finally, we might do shadowmap filtering using stochastic algorithms that are amortized among frames using reprojection.</span></i></p><p style="text-align: justify;"><span style="font-family: arial;"><b>4) Think about the limitations of available data.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">We made some choices, in the previous steps, now it’s time to forecast what results we could get.</span></p><p style="text-align: justify;"><span style="font-family: arial;">This is important in both directions, sometimes we underestimate what’s possible with the data that we can realistically compute in a real-time setting, other times we can “prove” that fundamentally we don’t have enough/the right data, and we need a perspective change.</span></p><p style="text-align: justify;"><span style="font-family: arial;">A good tool to think about this is to try a brute-force solution over our data structures, even if it wouldn’t be feasible in real-time, it would provide a sort of ground truth (more on this later): what’s the absolute best we could do with the data we have.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Some examples, from my personal experience.</i></span></p><p style="text-align: justify;"></p><ul><li><span style="font-family: arial;"><i>When Crysis came out I was working at a company called Milestone, and I remember Riccardo Minervino, one of our technical artists, dumping the textures in the game, from which we “discovered” something that looked like AO, but looked like it was done in screen-space. What sorcery was that, we were puzzled and amazed. It took though less than a day, unconsciously following some of the lines of thought I’m writing about now, for me to guess that it must have been done with the depth buffer, and from there, that one could try to “simply” raytrace the depth buffer, taking inspiration from <a href="https://en.wikipedia.org/wiki/Relief_mapping_(computer_graphics)">relief mapping</a><b>.</b></i></span></li><li><span style="font-family: arial;"><i>This ended up not being the actual technique used by Crytek (raymarching is way too slow), but it was even back in the day an example of “best that can be done with the data available” - and when Jorge and I were working on <a href="https://www.activision.com/cdn/research/Practical_Real_Time_Strategies_for_Accurate_Indirect_Occlusion_NEW%20VERSION_COLOR.pdf">GTAO</a>, one thing that we had as a reference was a raymarched AO that Jorge wrote using only the depth data.</i></span></li><li><span style="font-family: arial;"><i>Similarly, I’ve used this technique a lot when thinking of other screen-space techniques, because these have obvious limitations in terms of available data. 
Depth-of-field and motion-blur are an example, where even if I never wrote an actual brute-force-with-limited-information solution, I keep that in the back of my mind. I know that the “best” solution would be to scatter (e.g. the <a href="https://bartwronski.com/2014/04/07/bokeh-depth-of-field-going-insane-part-1/">DOF particles approach</a>, first seen in Lost Planet, which FWIW on compute nowadays could be more sane), I know that’s too slow (at least, it was when I was doing these things) and that I had to “gather” instead, but to understand the correctness of the gather, you can think of what you’re missing (if anything) comparing to the more correct, but slower, solution.</i></span></li></ul><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>4B) Machine learning as an upper limit.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">The only caveat here is that in many cases the true “best possible” solution goes beyond algorithmic brute force, and instead couples that with some inference. I.e. we don’t have the data we’d like, but can we “guess”? That guessing is the realm of heuristics. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Lately, the ubiquity of ML opened up an interesting option: to use machine learning as a proxy to validate the “goodness” of data.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>For example, in SSAO a typical artifact we get is dark silhouettes around characters, as depth discontinuities are equivalent to long “walls” when interpreting the depth buffer naively (i.e. as a heightfield). But we know that’s bad, and any <u>competent</u> SSAO (or SSR, etc) employs some heuristic to assign some “thickness” to the data in the depth buffer (at least, virtually) to allow rays to pass behind certain objects. That heuristic is a guessing game, how do we know how well we could do? There, training a ML model with ground truth, raytraced AO, and feeding it only the depth-buffer as inputs, can give us an idea of the best we could ever do, even if we are not going to deploy the ML model in real-time, at all.</i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>See also: <a href="https://research.nvidia.com/publication/2016-06_deep-g-buffers-stable-global-illumination-approximation">Deep G-Buffers for GI</a></i></span><span style="text-align: left;"><span style="font-family: arial;"><i> but remember, here I'm specifically talking about ML as proof of feasibility, not as the final technique to deploy.</i></span></span></p><p style="text-align: justify;"><b style="font-family: arial;">5) The hierarchy of ground truths.</b></p><p style="text-align: justify;"><span style="font-family: arial;">The beauty of rendering is that we can pretty much express all our problems in a single equation, we all know it, Kajiya’s Rendering Equation.</span></p><p style="text-align: justify;"><span style="font-family: arial;">From there on, everything is really about making the solution practical, that’s all there is to our job. But we should never forget that the “impractical” solution is great for reference, to understand where our errors are, and to bound the limits of what can be done.</span></p><p style="text-align: justify;"><span style="font-family: arial;">But what is the “true” ground truth? In practice, we should think of a hierarchy. 
</span></p><p style="text-align: justify;"><span style="font-family: arial;">At the top, well, there is reality itself, that we can probe with cameras and other means of acquisition. Then, we start layering assumptions and models, even the almighty Rendering Equation already makes many, e.g. we operate under the model of geometrical optics, which has its own assumptions, and even there we don’t take the “full” model, we typically narrow it further down: we discard spectral dependencies, we simplify scattering models and so on.</span></p><p style="text-align: justify;"><span style="font-family: arial;">At the very least, we typically have four levels. First, it’s reality.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Second, is the outermost theoretical model, this is a problem-independent one we just assume for rendering in general, i.e. the flavor of rendering equation, scene representation, material modeling, color spaces, etc we work in.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Then, there is often a further model that we assume true for the specific problem at hand, say, we are Ambient Occlusion, that entire notion of AO being “a thing” is its own simplification of the rendering equation, and certain quality issues stem simply from having made that assumption.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Lastly, there is all that we talked about in the previous point, namely, a further assumption that we can only work with a given subset of data.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Often innovation comes by noticing that some of the assumptions we made along the way were wrong, we simply were ignoring parts of reality that we should not have, that make a perceptual difference. </span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>What good is it to find a super accurate solution to say, the integral of spherical diffuse lights with Phong shading, if these lights and that shading never exist in the real world? 
It’s sobering to look back at how often we made these mistakes (and our artists complained that they could not work well with the provided math, and needed more controls, only for us to notice that fundamentally, the model was wrong - point lights anybody?)</i></span></p><p style="text-align: justify;"><span style="font-family: arial;">Other times, the ground truth is useful only to understand our mistakes, to validate our code, or as a basis for prototyping.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>6) Use computers to help along the way.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">No, I’m not talking about ChatGPT here.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Numerical optimization, dimensionality reduction, data visualization - in general, we can couple analytic techniques with data exploration, sometimes with surprising results.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The first, more obvious observation, is that in general, we know our problems are not solvable in closed form, we know this directly from the rendering equation, this is all theory that should be so ingrained I won’t repeat it, our integral is recursive, its form even has a name, we know we can’t solve it, we know we can employ numerical techniques and blah blah blah path tracing.</span></p><p style="text-align: justify;"><span style="font-family: arial;">This is not very interesting per se, as we never directly deal with Kajiya’s in real-time, we layer assumptions that make our problem simpler, and we divide it into a myriad of sub-problems, and many of these do indeed have closed-form solutions.</span></p><p style="text-align: justify;"><span style="font-family: arial;">But even in these cases, we might want to further approximate, for performance. Or we might notice that an approximate solution (as in function approximation or look-up tables) with a better model is superior to an exact solution with a more stringent one.</span></p><p style="text-align: justify;"><span style="font-family: arial;">But there is a second layer where computers help, which is to inform our exploration of the problem domain. Working with data, and interacting with it, aids discovery. </span></p><p style="text-align: justify;"><span style="font-family: arial;">We might visualize a signal and notice it resembles a known function (or that by applying a given transform we improve the visualization) - leading to deductions about the nature of the problem, sometimes that we can reconduct directly to analytic or geometric insights. We might observe that certain variables are not strongly correlated with a given outcome, again, allowing us to understand what matters. Or we might do dimensionality reduction, clustering, and understand that there might be different sub-problems that are worth separating.</span></p><p style="text-align: justify;"><span style="font-family: arial;">To the extreme, we can employ symbolic regression, to try to use brute force computer exploration and have it "tell us" directly what it found. </span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Examples. 
This is more about methodology, and I can't know how much other researchers leverage the same methods, but in the years I've written multiple times about these themes:</i></span></p><p style="text-align: justify;"></p><ul><li><span style="font-family: arial;"><i>This <a href="http://c0de517e.blogspot.com/2016/07/siggraph-2015-notes-for-approximate.html">Siggraph talk</a></i></span></li><li><span style="font-family: arial;"><i><a href="http://c0de517e.blogspot.com/2014/06/stuff-that-every-programmer-should-know.html">Data visualization 101</a></i></span></li></ul><div><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgNaAIdy6MEe0TZ26aa7hJGZ66wafrwZrTq7WhTjo8GZlQTeUWYXqqEY2fQ4drMOsU4ZG53lQ6KdRsCVtFxZbAs8xZh6VKFLJ1TCc3hTeQSdSneaddsV456YE7R0r9WNYv2Xepj6wjTUAznMIwSHDYxB9uytst8FRHWi5YzK6R7-cn690rRYkMfEv9o9g" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="1218" data-original-width="1214" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEgNaAIdy6MEe0TZ26aa7hJGZ66wafrwZrTq7WhTjo8GZlQTeUWYXqqEY2fQ4drMOsU4ZG53lQ6KdRsCVtFxZbAs8xZh6VKFLJ1TCc3hTeQSdSneaddsV456YE7R0r9WNYv2Xepj6wjTUAznMIwSHDYxB9uytst8FRHWi5YzK6R7-cn690rRYkMfEv9o9g" width="239" /></a></div><br /></span></div><ul><li><span style="font-family: arial;"><i>Some <a href="http://c0de517e.blogspot.com/2017/11/datalog.html">horrible</a> old <a href="http://c0de517e.blogspot.com/2013/05/peeknpoke.html">tools</a> I wrote to connect live programs to data vis</i></span></li></ul><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>7) Humans over math.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">One of the biggest sins of computer engineering in general is not thinking about people, processes, and products. It's the reason why tech fails, companies fail, and careers "fail"... and it definitely extends to research as well, especially if you want to see it applied.</span></p><p style="text-align: justify;"><span style="font-family: arial;">In computer graphics, this manifests in two main issues. First, there is the simple case of forgetting about perceptual error measures. This is a capital sin both in papers (technique X is better by Y by this % MSE) and data-driven results (visualization, approximation...). </span></p><p style="text-align: justify;"><span style="font-family: arial;">The most obvious issue is to just use mean squared error (a.k.a. L2) everywhere, but often times things can be a bit trickier, as we might seek to improve a specific element in a processing chain that delivers pixels to our faces, and we too often just measure errors in that element alone, discounting the rest of the pipeline which would induce obvious nonlinearities.</span></p><p style="text-align: justify;"><span style="font-family: arial;">In these cases, sometimes we can just measure the error at the end of the pipeline (e.g. 
on test scenes), and other times we can approximate/mock the parts we don't explicitly consider.</span></p><p style="text-align: justify;"><span style="font-family: arial;">As an example, if we are approximating a given integral of a luminaire with a set BRDF model, we should probably consider that the results would go through a tone mapper, and if we don't want to use a specific one (which might not be wise, especially because that would probably depend on the exposure), we can at least account for the roughly logarithmic nature of human vision...</span></p><p style="text-align: justify;"><span style="font-family: arial;">Note that a variant of this issue is to use the wrong dataset when computing errors or optimizing, for example, one might test a new GPU texture compression method over natural image datasets, while the important use case might be source textures, that have significantly different statistics. All these are subtle mistakes that can cause large errors (and thus, also, the ability to innovate by fixing them...)</span></p><p style="text-align: justify;"><span style="font-family: arial;">The second category of sins is to forget whose life we are trying to improve - namely, the artist's and the end user's. Is a piece of better math useful at all, if nobody can see it? Are you overthinking PBR, or focusing on the wrong parts of the imagining pipeline? What matters for image quality? </span></p><p style="text-align: justify;"><span style="font-family: arial;">In most cases, the answer would be "the ability of artists to iterate" - and that is something very specific to a given product and production pipeline. </span></p><p style="text-align: justify;"><span style="font-family: arial;">If you can spend more time with artists, as a computer graphics engineer, you should. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Nowadays unfortunately productions are so large that this tight collaboration is often unfeasible, artists dwarf engineers by orders of magnitude, and thus we often create some tools with few data points, release them "in the wild" of a production process, where they might or might not be used in the ways they were "supposed" to. Even the misuse is very informative. </span></p><p style="text-align: justify;"><span style="font-family: arial;">We should always remember that artists have the best eyes, they are our connection to the end users, to the product, and the ones we should trust. If they see something wrong, it probably is. It is our job to figure out why, where, and how to fix it, all these dimensions are part of the researcher's job, but the hint at what is wrong, comes first from art. </span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>An anecdote I often refer to, because I lived through these days, is when artists had only point lights, and some demanded that lights carried modifiers to the roughness of surfaces they hit. I think this might have even been in the OpenGL fixed function lighting model, but don't quote me there. Well, most of us laughed (and might laugh today, if not paying attention) at the silliness of the request. 
Only to be humbled by the invention of "roughness modification" as an approximation to area lights...</i></span></p><p style="text-align: justify;"><span style="font-family: arial;">Here is where I should also mention the idea of taking inspiration from other fields, this is true in general and almost to the same level as suggestions like "take a walk to find solutions" or "talk to other people about the problem you're trying to solve" - i.e. good advice that I didn't feel was specific enough. We know that creativity is the recombination of ideas, and that being a good mix of "deep/vertical" and "wide/horizontal" is important... in life. </span></p><p style="text-align: justify;"><span style="font-family: arial;">But specifically, here I want to mention the importance of looking at our immediate neighbors: know about art, its tools and language, know about offline rendering, movies, and visual effects, as they can often "predict" where we will go, or as we can re-use their old techniques, look at acquisition and scanning techniques, to understand the deeper nature of certain objects, look at photography and movie making. </span></p><p style="text-align: justify;"><span style="font-family: arial;">When we think about inspiration, we sometimes limit ourselves to related fields in computer science, but a lot of it comes from entirely different professions, again, humans.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>See also:</i></span></p><p style="text-align: justify;"></p><ul><li><span style="font-family: arial;"><a href="http://c0de517e.blogspot.com/2019/05/seeing-whole-physically-based-picture.html"><i><b>Seeing the whole PBR picture</b></i></a> <i>and parts of <a href="http://c0de517e.blogspot.com/2022/06/real-time-rendering-past-present-and.html">this talk at Intel</a></i></span></li><li><span style="font-family: arial;"><a href="http://c0de517e.blogspot.com/2011/09/fight-night-champion-gdc.html"><i>My notes on the rendering research in Fight Night Champion</i></a></span></li><li><span style="font-family: arial;"><a href="https://renderwonk.com/publications/i3d2013-keynote/"><i>Naty's "outside the echo chamber" talk</i></a></span></li><li><span style="font-family: arial;"><i>Looking at the lighting of a movie set it instructive, I often reference <a href="http://www.gregorycrewdsonmovie.com/">Gregory Crewdson</a> (who uses similar techniques in photography)</i></span></li><li><span style="font-family: arial;"><i>"Kids these days" might think that AO has always been a real-time idea, but it actually <a href="https://www.semanticscholar.org/paper/Production-Ready-Global-Illumination-Landis/4a9de79235445fdf346b274603dfa5447321aab6">comes from movies</a> - including bent normals, in fact, the first time I personally coded bent normals, was because of Landis paper.</i></span></li></ul><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>8) Find good priors.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">A fancy term for assumptions, but here I am specifically thinking of statistics over the inputs of a given problem, not simplifying assumptions over the physics involved. It is often the case in computer graphics that we cannot solve a given problem in general, literally, it is theoretically not solvable. 
But it can become even easy to solve once we notice that not all inputs are equally likely to be present in natural scenes.</span></p><p style="text-align: justify;"><span style="font-family: arial;">This is the key assumption in most image processing problems, upsampling, denoising, inpainting, de-blurring, and so on. In general, images are made of pixels, and any configuration of pixels is an image. But out of this gigantic space (width x height x color channels = dimensions), only a small set comprises images that make any sense at all, most of the space is occupied, literally, by random crap.</span></p><p style="text-align: justify;"><span style="font-family: arial;">If we have some assumption over what configurations of pixels are more likely, then we can solve problems. For example, in general upsampling has no solution, downsampling is a lossy process, and there is no reason for us to prefer a given upsampled version to another where both would generate the same downsampled results... until we assume a prior. By hand and logic, we can prioritize edges in an image, or gradients, and from there we get all the edge-aware upsampling algorithms we know (or might google). If we can assume more, say, that the images are about faces or text and so on, we can create truly miraculous (hallucination) techniques.</span></p><p style="text-align: justify;"><span style="font-family: arial;">As an aside, specifically for images, this is why deep learning is so powerful - we know that there is a tiny subspace of all possible random pixels that are part of naturally occurring images, but we have a hard time expressing that space by handcrafted rules. So, machine learning comes to the rescue.</span></p><p style="text-align: justify;"><span style="font-family: arial;">This idea though applies to many other domains, not just images. Convolutions are everywhere, sparse signals are everywhere, and noise is everywhere, all these domains can benefit from adopting priors. </span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>E.g. we might know about radiance in parts of the scene only through some diffuse irradiance probes (e.g. spherical harmonics). Can we hallucinate something for specular lighting? In general, no, in practice, probably. We might assume that lighting is likely to come from a compact set of directions (a single dominant luminaire). Often times is even powerful to assume that lighting comes mostly from the top down, in most natural scenes - e.g. bias AO towards the ground...</i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>See also: <a href="http://c0de517e.blogspot.com/2013/12/enhance-this.html">an old rant on super-resolution ignorance</a></i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>9) Delve deep.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">This is time-consuming and can be annoying, but it is also one of the reasons why it's a powerful technique. Keep asking "why". Most of what we do, and I feel this applies outside computer graphics as well, is habit or worse, hype. And it makes sense for this to be the case, we simply do not have the time to question every choice, check every assumption, and find all the sources when our products and problems keep growing in complexity.</span></p><p style="text-align: justify;"><span style="font-family: arial;">But for a researcher, it is also a land of opportunity. 
Often times we can even today, pick a topic at random, a piece of our pipeline, and by simply keep questioning it we'll find fundamental flaws that when corrected yield considerable benefits. </span></p><p style="text-align: justify;"><span style="font-family: arial;">This is either because of mistakes (they happen), because of changes in assumptions (i.e. what was true in the sixties when a given piece of math was made, is not true today), because we ignored the assumptions (i.e. the original authors knew a given thing was applicable only in a specific context, but we forgot about it), or because we plugged in a given piece of math/algorithm/technology and added large errors while doing it.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>A simple example: most integrals of lighting and BRDFs with normalmaps, which cause the hemisphere of incoming light directions to partially be occluded by the surface geometry. We clearly have to take that horizon occlusion into consideration, but we often do not, or if we do it's through quick hacks that were never validated. Or how we use Cook-Torrance-based BRDFs, without remembering that they are valid only up to a given surface smoothness. Or how nobody really knows what to do with colors (What's the right space for lighting? For albedos? To do computation? We put sRGB primaries over everything and call it a day...). But again, this is everywhere, if one has the patience of delving...</i></span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>10) Shortcut via proxies.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Lastly, this one is a bit different than all the others, in a way a bit more "meta". It is not a technique to find ideas, but one to accelerate the development process of ideas, and it is about creating mockups and prototypes as often as possible, as cheaply as possible.</span></p><p style="text-align: justify;"><span style="font-family: arial;">We should always think - can I mock this up quickly using some shortcut? Especially where it matters, that's to say, around unknowns and uncertain areas. Can I use an offline path tracer, and create a scene that proves what my intuition is telling me? Perhaps for that specific phenomenon, the most important thing is, I don't know, the accuracy of the specular reflection, or the influence of subsurface scattering, or modeling lights a given way...</span></p><p style="text-align: justify;"><span style="font-family: arial;">Can I prove my ideas in two dimensions? Can I use some other engine that is more amenable to live coding and experimentation? Can I modify a plug-in? Can I create a static scene that previews the performance implication of something that I know will need to generate dynamically - say a terrain system or a vegetation system.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Can I do something with pen and paper? Can I build a physical model? Can I gather data from other games, from photos, from acquiring real-world data... Can I leverage tech artists to create mocks? Can I create artificial loads to investigate the performance, on hardware, of certain choices?</span></p><p style="text-align: justify;"><span style="font-family: arial;">Any way you have to write less code, take less time, and answer a question that allows you to explore the solution space faster, is absolutely a priority when it comes to innovation. Our design space is huge! 
It's unwise to put prematurely all the chips on a given solution, but it is equally unwise to spend too long exploring wide, so the efficiency of the exploration is paramount, in practice.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>- Ending rant.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">I hope you enjoyed this. In some ways I realize it's a retrospective that I write now as I've, if not closed, at least paused the part of my career that was mostly about graphics research, to learn about other areas that I have <a href="http://c0de517e.blogspot.com/2019/04/how-to-choose-your-next-job-why-i-went.html">not seen before</a>.</span></p><p style="text-align: justify;"><span style="font-family: arial;">It's almost like these youtube ads where people peddle free pamphlets on ten easy steps to become rich with Amazon etc, minus the scam (trust me...). Knowledge sharing is the best pyramid scheme! The more other people innovate, the more I can (have the people who actually write code these days) copy and paste solutions :)</span></p><p style="text-align: justify;"><span style="font-family: arial;">I also like to note how this list could be called "anti-design patterns" (not anti-patterns, which are still patterns), the opposite of DPs in the sense that I hope for these to be starting points for ideas generation, to apply your minds in a creative process, while DPs (ala GoF) are prescribed (terrible) "solutions" meant to be blindly applied. </span></p><p style="text-align: justify;"><span style="font-family: arial;">I probably should not even mention them because at least in my industry, they are finally effectively dead, after a phase of hype (unfortunately, we are often too mindless in general) - but hey, if I can have one last stab... why not :)</span></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com1tag:blogger.com,1999:blog-6950833531562942289.post-20944612376854620542022-06-26T17:25:00.001-07:002022-06-29T19:07:23.026-07:00Machines Arose<p><i><span style="font-family: arial;">The era of algorithmic slavery.</span></i></p><p><span style="font-family: arial;">When we think of the rise of the machines, we picture skynet and the matrix. Humanity literally fighting the AI, with big pew-pew guns, and getting enslaved by it. 
</span><span style="font-family: arial;">Heroes seeing through the deception, illuminated minds, perhaps looking insane to the average bystander, purposed with a higher calling.</span></p><p><span style="font-family: arial;">We lose ourselves in the bombast of Hollywood, we take metaphors literally, we fear or dream of the singularity, look for signs of consciousness in the code we write.</span></p><p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj8an1mrQE9loPTJnpOObJ_NWQ5mzd1iALhkRBmdg8BFdCVMgOXkq8daAtdlS5eQIzUOM_yxMNAKBOAIM60PCIDXA4kJb746UuypbJ8CwBfMN2abaEsVANwJ3YRAQXoeXdQyw90AWqyIqx4_KLKMAZg1ijs8CqgWOaj0w3TOENl_NOGAKv8gQlJ4vL7DA" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="360" data-original-width="480" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEj8an1mrQE9loPTJnpOObJ_NWQ5mzd1iALhkRBmdg8BFdCVMgOXkq8daAtdlS5eQIzUOM_yxMNAKBOAIM60PCIDXA4kJb746UuypbJ8CwBfMN2abaEsVANwJ3YRAQXoeXdQyw90AWqyIqx4_KLKMAZg1ijs8CqgWOaj0w3TOENl_NOGAKv8gQlJ4vL7DA" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">A lesser known game from Bethesda... 2029 is close!</span></td></tr></tbody></table></p><p><span style="font-family: arial;">In reality, the danger is the opposite. It’s not how much consciousness the machines gain. It is rather, how much they remove from us. </span><span style="font-family: arial;">Yes, the recent <a href="https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/">"LaMDA is sentient" BS</a> is not much more than a bad publicity stunt - but that doesn't mean that Google is not scary!</span></p><p><span style="font-family: arial;">We are of course already dependent on machines - that is not the problem - our degree of attachment to them. We are dependent on all technology we create. 
It’s the defining feature of humanity to better itself through technology, it has been true since we made fire.</span></p><p><span style="font-family: arial;">For millennia we have used technology to elevate ourselves, to free us from the minutiae of living and sublimate our spirit, enabling higher forms of creativity, allowing us to dedicate more time to work that is intellectual in nature.</span></p><span style="font-family: arial;">You can call this productivity, even augmented intelligence - once we discovered that technology is not good simply to ease physical labor, but can be shaped into tools for better thinking.</span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgPzb3cPdfUSOHFEiW_eudR7HLrTUf06FaTqP2uo3bnSAjJA_cITQEiBAYamzKQzLeWdWDVjplYtM10vjXrharNnTyo8rgTB6DmtgzDoaNdf1zRsTjKVBYKQnGKI7_AI-rHa5kaMCFeINUJg957G7BWWqPvIu47NcpvG_6V3of-Szcj5ZSec8JXALKATA" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="1549" data-original-width="3000" height="165" src="https://blogger.googleusercontent.com/img/a/AVvXsEgPzb3cPdfUSOHFEiW_eudR7HLrTUf06FaTqP2uo3bnSAjJA_cITQEiBAYamzKQzLeWdWDVjplYtM10vjXrharNnTyo8rgTB6DmtgzDoaNdf1zRsTjKVBYKQnGKI7_AI-rHa5kaMCFeINUJg957G7BWWqPvIu47NcpvG_6V3of-Szcj5ZSec8JXALKATA" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Sougwen Chung (愫君) - Machines can be tools that augment our creativity. </td></tr></tbody></table><br /></span><div><span style="font-family: arial;">Is this trend going to end one day? Is is already ending?</span></div><div><p><span style="font-family: arial;">Will we live in a world where it’s increasingly hard to be a value-add via the use of technology, but rather most of us will be made irrelevant by it? What happens to the masses that can’t produce anything of interest?</span></p><p><span style="font-family: arial;">Can our creativity outpace the machine’s forever? </span></p><p><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEh75Kjx0HLCWrapvzntGIfOmOtmn4Ir6SAxP5Z7Vsakdk27jlL33gMdg-TuawtT6Hx5UcGGOXODa59oqY6HbKSzZ5dsEGOl1U0uKY_b1FNwXL6ohQIB633h4gD8p8XhCiTSDwHPw4LwK83kAPnQ76nQPfEy_LbnogJf4zjrozRZovu5OXlW5aFd2up-vA" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="142" data-original-width="311" height="146" src="https://blogger.googleusercontent.com/img/a/AVvXsEh75Kjx0HLCWrapvzntGIfOmOtmn4Ir6SAxP5Z7Vsakdk27jlL33gMdg-TuawtT6Hx5UcGGOXODa59oqY6HbKSzZ5dsEGOl1U0uKY_b1FNwXL6ohQIB633h4gD8p8XhCiTSDwHPw4LwK83kAPnQ76nQPfEy_LbnogJf4zjrozRZovu5OXlW5aFd2up-vA" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://www.youtube.com/watch?v=g9Z0pqsCUhY">https://www.youtube.com/watch?v=g9Z0pqsCUhY</a></td></tr></tbody></table><br />One can argue that a tool remains a tool, and in the history of the world, short-sighted people always lamented when creation became more accessible, from painting to film photography, from film to digital cameras, from cameras to smartphones. 
</span></p><p><span style="font-family: arial;">There is always someone lamenting the loss of "true" art - and they are always wrong... but! At the same time, we have enough historical evidence of machines displacing jobs, labor having to learn new skills, often painfully, for the generations caught in the transition. </span></p><p><span style="font-family: arial;">There is some reason to worry, then - </span><span style="font-family: arial;">but it's not the key to the story here. Creativity is likely to remain firmly in the domain of humans, in fact one could say that a truly creative machine would need to be a conscious one, and that is not the scenario I'm interesting in.</span></p><p><span style="font-family: arial;">The danger is subtler, closer and more real. </span></p><p><span style="font-family: arial;">Do we already live in a world where we many creators are replaceable slaves, being milked for content by algorithms that are the true holders of value?</span></p><p><span style="font-family: arial;"></span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjDT8BDrOw0gKBHJyqJOj7WMUuJVv4vllYIlNoS6v2pCvAnbPTtb61E_gf0Xk_UaSQFjxr0sEePhJV7ODttQZ-FCRJjFycjCJ2s-LAlhr7G4wIBoF13RcXTpieEqzdtrdtyHhvUJbY6WM_vmzowA06xmYjhlt5SzZlJ_7MFtlTFLKh0iWj3ZQSHdzJOQQ" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="476" data-original-width="572" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEjDT8BDrOw0gKBHJyqJOj7WMUuJVv4vllYIlNoS6v2pCvAnbPTtb61E_gf0Xk_UaSQFjxr0sEePhJV7ODttQZ-FCRJjFycjCJ2s-LAlhr7G4wIBoF13RcXTpieEqzdtrdtyHhvUJbY6WM_vmzowA06xmYjhlt5SzZlJ_7MFtlTFLKh0iWj3ZQSHdzJOQQ" width="288" /></a></div><br /><span style="font-family: arial;">AIs feed us during most of our days. Shodan's tools are videos of kittens, dogs and babies. And her minions are willingly joining, hoping for visibility and connection. </span><p></p><p><span style="font-family: arial;">It's a marvelous machine that exploits the brain chemistry of consumers with cheap dopamine, and of creators, as we seek to show our photos and videos for follows, we increasingly define our value in society by the number of likes we get.</span></p><p><span style="font-family: arial;">How conscious are we, when most of our connections are software mediated, and sentiment analyzed? </span><span style="font-family: arial;">The algorithm does not know when to stop, and neither do our brains. Dopamine is the AI’s sugar.</span></p><p><span style="font-family: arial;">We do not need to be intubated, in pods, to be enslaved. We don't even need to be slaves, once we created a system that gives some short-term pleasure, <a href="https://www.newyorker.com/culture/infinite-scroll/how-the-internet-turned-us-into-content-machines">we willingly subjugate to it</a>.</span></p><p><span style="font-family: arial;">Don’t take your science fiction literally.</span></p><p><span style="font-family: arial;">I don't fear the sentient AI and the singularity. I don't care much about privacy and crypto-anarchism. I think we are looking at the wrong problems. 
Even the worries about physicals changes in our <a href="https://onlinelibrary.wiley.com/doi/10.1002/wps.20617">cognitive abilities</a>, psychology and <a href="https://www.newyorker.com/culture/decade-in-review/the-age-of-instagram-face">looks</a> might be overstated - as we are very plastic, we adapt.</span></p><p><span style="font-family: arial;">And for how despicable the role of simplistic recommendation algorithms, shares and likes have on creating <a href="https://medium.com/swlh/how-persuasive-algorithms-drive-political-polarization-75819854c11d">information bubbles and drive polarization</a>, we are beginning to understand and rebel - systems might be tuned differently...</span></p><p><span style="font-family: arial;">The existence of a system though, per se, and the fact that can be tuned - is that ever possibly moral? Are we not saying that we are losing agency, if the way a machine operates controls society?</span></p><p><span style="font-family: arial;">This is a Silicon Valley problem that SV cannot solve for itself. It's the natural evolution of companies to want to be successful, and we are in a world where success means engaging billions of people, capturing a large percent of their time and attention.</span></p><p><span style="font-family: arial;">These systems can hardly be called tools, and are clearly not in our control.</span></p></div></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-41098659475573056282022-06-11T18:57:00.008-07:002023-02-21T01:40:18.934-08:00Real-time rendering - past, present and a probable future.<p><span style="font-family: arial;">This presentation was a keynote given to a private company event - I'm not sure if I'm at liberty to say more about it - but the content is quite universal, so I hope you'll enjoy!</span></p><p><span style="font-family: arial;">It does not talk directly of Roblox or the Metaverse... but at the same time, it has, near the end, some strong connections to it.</span></p><p style="text-align: center;"><span style="font-family: arial;"><b><a href="https://www.dropbox.com/s/xmakzggd7py1xvp/PPPfuture_PDF.pdf?dl=0">Slides here</a>!</b></span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiXzSxoB7Ix-LjM91Icyc2LyyzhONmxViOgg3L3G9Xi7JjdOfkuJhR30m3BKFlQ-rvgu-EaeINJ8m2yGz3T1t5DSaJk5bVSgdF2LtMaNJmi8M8-SJirOGvlmFlSG7CbqVoTjky3COhIxVlxi04V7Pn4N4lL8why2-2oaUM-mqYPgtO2j403NSsnJ2iy8g" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="554" data-original-width="1000" height="177" src="https://blogger.googleusercontent.com/img/a/AVvXsEiXzSxoB7Ix-LjM91Icyc2LyyzhONmxViOgg3L3G9Xi7JjdOfkuJhR30m3BKFlQ-rvgu-EaeINJ8m2yGz3T1t5DSaJk5bVSgdF2LtMaNJmi8M8-SJirOGvlmFlSG7CbqVoTjky3COhIxVlxi04V7Pn4N4lL8why2-2oaUM-mqYPgtO2j403NSsnJ2iy8g" width="320" /></a></div><p></p><p><span style="font-family: arial;">Also... this is not the first "open problems" slide deck I make, and I mentioned an unfinished one in previous presentations... I realize I will never finish it - or rather, I am not as passionate about it anymore, so... 
here it is, frozen in its eternal WIP state: <a href="https://www.dropbox.com/s/jkyiizjbg51u403/Open_Problems_WIP.pdf?dl=0"><b>slides</b></a> - circa 2015</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjpMxo7tVZu4Ievc648Pok1IO3dUrg-AkUHFfPQPekRWn0DxAO6MEl4VWe-uBzx-2qkvGBqQ6_u0jmHeb3bTvL1E7NCQpQ2RnNJuzb94PbxidMf_DSi0LynDYSj3TIf3lzMljP01lmt4p4VhirmwHA7OO7WvFfx9_ebIArPVMPbG9JCrEzZfG46J6dtYg" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="554" data-original-width="994" height="178" src="https://blogger.googleusercontent.com/img/a/AVvXsEjpMxo7tVZu4Ievc648Pok1IO3dUrg-AkUHFfPQPekRWn0DxAO6MEl4VWe-uBzx-2qkvGBqQ6_u0jmHeb3bTvL1E7NCQpQ2RnNJuzb94PbxidMf_DSi0LynDYSj3TIf3lzMljP01lmt4p4VhirmwHA7OO7WvFfx9_ebIArPVMPbG9JCrEzZfG46J6dtYg" width="320" /></a></div><br /><br /><p></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-6059049086703335482022-04-10T20:28:00.006-07:002023-02-21T01:40:46.033-08:00 DOS Nostalgia: On using a modern DOS workstation.<p><span style="font-family: arial;"><b>Premise.</b></span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;">This blog post is useless. And rambling. As it's useless the machine I'm typing this on, a Pentium 3 subnotebook from the 90ies. You have been warned!</span></p><p><span style="font-family: arial;">But, it might be entertaining, and I suspect many of the people doing what I do and reading what I write, are in a similar demographic and might be starting to be nostalgic, thinking of their formative years and wondering if they're worth revisiting...</span></p><p><span style="font-family: arial;"><b>Objectives.</b></span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;">I wanted to find a DOS machine, not for retrogaming (only), but to do actual "work". Even more narrowly, I had an idea of trying to compile an old DOS demo I made in the nineties, the only production of a short-lived Italian group called "day zero deflection" (you won't find it).</span></p><p><span style="font-family: arial;">Monotasking. No internet. These things are so appealing to me right now. One tries to escape the dopamine rush of doomscrolling on all the connected devices that surround us. The flesh is weak, and instead of trying to muster the required willpower, shopping for a hardware solution seems so much more attractive. Of course, it's a fool's errand, but hey, I said this post was going to be useless.</span></p><p><span style="font-family: arial;"><b>A Long, intermezzo of personal history.</b></span></p><p><i><span style="font-family: arial;">(skip this!)</span><span style="font-family: arial;"> </span></i></p><p><span style="font-family: arial;">It's interesting how memory works. So non-linear, and unreliable. I used a lot of computers in my life, and I started early, I began programming around six or seven years old.</span></p><p><span style="font-family: arial;">This past Christmas, as the pandemic eased up, I was able again to fly and spend time with my family in southern Italy. 
Found one of the Commodore 64 we had.</span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzWwLYxZkWUPjkdNJWMJ5c3wv1asBvA73o_b6Y3DJzO_Ab4x91yi7_ijuSlB4k0Q8w1Sh2X_btsIBnyh2eZtRu7iq3_-Lv71ObOy04t887LWqD4ZY8TmWPt6TwaBmLHru1035icpfGqnyb5mJODY6zox4e13EVMbJRNfcyoXjloPNMr4H0mTykhSFYRA/s1440/270847631_481698340039584_2722192632676612066_n.jpeg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1440" data-original-width="1440" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzWwLYxZkWUPjkdNJWMJ5c3wv1asBvA73o_b6Y3DJzO_Ab4x91yi7_ijuSlB4k0Q8w1Sh2X_btsIBnyh2eZtRu7iq3_-Lv71ObOy04t887LWqD4ZY8TmWPt6TwaBmLHru1035icpfGqnyb5mJODY6zox4e13EVMbJRNfcyoXjloPNMr4H0mTykhSFYRA/s320/270847631_481698340039584_2722192632676612066_n.jpeg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The c64 in question. Yes, it needed some love - albeit to my surprise, all my disks worked, with my childhood code! The video glitch is actually a quite mysterious defect, but it's a story for another time...</td></tr></tbody></table><p><span style="font-family: arial;">We, because I grew up with my older cousins, my mother is the last of eleven siblings, so I have a lot of cousins, many close to my house as my family used to be farmers, and thus had land that eventually became buildings, with many of my aunts and uncles ending up living in the same park.</span></p><p><span style="font-family: arial;">These older cousins taught me programming, and I was using their computers before having my own. In fact, the c64 I found is most likely theirs, as mine was eventually donated to some relative that needed it more.</span></p><p><span style="font-family: arial;">I remember a lot of this, in detail, albeit I don't know anymore what details are real and what ended up as images remixed from different time eras.</span></p><p><span style="font-family: arial;">We were in the basement of my aunt's villa, just next door to the building I grew up in, where we had an apartment on the top floor. We would transfer things between the two by lowering a rope from the balcony down to the villa's garden. Later, when we had PCs and network cards, we moved bits between the buildings, having suspended a coax cable that ran from the second floor of my building (where another cousin lived) to my floor, to the villa.</span></p><p><span style="font-family: arial;">The basement was originally the studio of my uncle, who was the town's priest. I was named after him. He and one of his sisters died in a car accident when I was little, so I am not sure I really remember of him, sadly.</span></p><p><span style="font-family: arial;">But I remember the basement, the Commodore 64, and later an 8086 with an external hard drive the same size and shape as the main unit. An amber monitor monochrome I think, or perhaps it was both amber and green, with a configuration switch.</span></p><p><span style="font-family: arial;">I remember all of the c64 games we played, easily. 
I remember bits of my coding journey, the <a href="https://archive.org/details/machine-code-for-beginners">books we used to study</a>, and once my cousin being dismayed that I could not figure how to make a cursor move on the screen (the math to go to the next/previous row), even if it was mostly a misunderstanding.</span></p><p><span style="font-family: arial;">I remember playing with my Amiga 600 there too, Body Blows - I switched to the Amiga after visiting... another cousin, this time, in Milan.</span></p><p><span style="font-family: arial;">I remember the first Pentium they had because it allowed me to use more 3d graphics software. 3d studio 4 without having to resort to software 387 emulation! At the time I had an IBM PS/2 with a 486sx which the seller persuaded my father would be better than a 486dx another guy was offering us - who needs a math coprocessor, and IBM is a much better brand than something home-made... And I know that numerous times I lost all the data on these computers that I did not own, often by typing "format" too fast and putting the wrong drive letter in.</span></p><p><span style="font-family: arial;">And then, nothing? Everything more modern than that I sort of lost, or rather, becomes more confused. I know the places I went shopping for (pirated) software and hardware, maybe some of the faces, not sure. </span></p><p><span style="font-family: arial;">I know used to lug my PC tower for the few kilometers that separated my house in Scafati from the "shop" (really a private apartment) that I used to go to in Pompei, as I was a kid, and did not have a car of course. </span></p><p><span style="font-family: arial;">And that tells me that I had lots of different PC configurations over the years, LOTS of them, AMD, Intel, Voodoo cards, a Matrox of some sorts, even a Sound Blaster AWE32 at a point, a CD-ROM and the early CD games, I remember the excitement for each new accessory and card, and the intense hate for cable and thermal management, especially on more modern setups. </span></p><p><span style="font-family: arial;">I remember scanners, the first were hand-held (Logitech ScanMan, then Trust), printers, joysticks, graphics tablets when I got into photography, the very first digital camera I had (<a href="https://www.dpreview.com/reviews/olympusc40z">I think an Olympus</a>). It's all "PC" for me, I have no idea of what I was using in which year.</span></p><p><span style="font-family: arial;">At a point, around university, I switched to primarily using laptops. Acer or Asus, something cheap and powerful but they would break often (cheap plastics). Then finally the MacBook Pro, and that one has remained a constant, still today my primary personal machine.</span></p><p><span style="font-family: arial;">So. My nostalgia is about three machines, really, even if I had dozens. The Commodore 64, the one I remember the most. I am eager to play around with that one more, I ordered all sorts of HW, but I have no intentions to use it "daily" - that one belongs to a museum. 
</span></p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPH8nIWZUTaece3KAHPpJohD0SSog5UaxrOWCKfkQqq-3RrPSRh8yCmt38-0fDm762twSm3OFvqdyyXVWCRpnV0Q5SRyUs2CLSIEi38XjPnlU_XcQZbA9iR1-va5lpF4Rw59HWs6MblmMYlH3c9uv4R6OSPC5U8YLympk3pax8_9jr3xxuCqTgYMpqqA/s1440/258778972_120908920467273_8761460821175258538_n.jpeg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1440" data-original-width="1440" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPH8nIWZUTaece3KAHPpJohD0SSog5UaxrOWCKfkQqq-3RrPSRh8yCmt38-0fDm762twSm3OFvqdyyXVWCRpnV0Q5SRyUs2CLSIEi38XjPnlU_XcQZbA9iR1-va5lpF4Rw59HWs6MblmMYlH3c9uv4R6OSPC5U8YLympk3pax8_9jr3xxuCqTgYMpqqA/s320/258778972_120908920467273_8761460821175258538_n.jpeg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The MisterFPGA c64 core is great and can output 50hz!</td></tr></tbody></table><p></p><p><span style="font-family: arial;">The Amiga, which for some reason I don't care as much for anymore, I suspect mostly because I was using it primarily for games so I did not create as much on it - I think that was the key.</span></p><p><span style="font-family: arial;">I had some graphic programs, but I was not a great 2d artist (DeluxePaint) and I did not understand enough of the 3d tools I happened to get my hands on (Real3D, VistaPro)... and I did no coding on it. At a point, I had a (pirate) copy of Amos, but no manual.</span></p><p><span style="font-family: arial;">Swapping disks, real or virtual, is also not fun.</span></p><p><span style="font-family: arial;">And then the PC, specifically the 486sx that I used both for programming again (QBasic, PowerBasic, Assembly then C with DJGPP), for graphics (Imagine, then Lightwave among others), photography, the internet...</span></p><p><span style="font-family: arial;">That 486 captures all of my PC memories, even if I know it's wrong. For example, during my C demo-coding times, I must have had a different computer, because the demo we were making would never run on a 486, they were sVGA, I even remember coding our sVGA layer, fixing a bug in the Matrox VESA bios - they were out of spec, not setting the viewport to be the same as the screen resolution when changing the latter, and many demos did run with the wrong line pitch because of that. Not mine! And the demo was, for some reason, writing buffers in separate R,G,B planes, with some MMX code I made to then shuffle them back into the display frame. </span></p><p><span style="font-family: arial;">So, it could not have been the 486 - but this is great, it gives me the freedom of not trying to recreate a particular setup but instead going for that same feeling and toolset I remember using, on an entirely different system.</span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;"><b>What do we "need"?</b></span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;">Here's the plan. First and foremost, we'll get a laptop, because I don't have space in my apartment, no, in my life, for retrocomputing desktop or tower. Also, I want to go to hipster coffee shops and write on my hipster retro workstation, as I am doing right now. 
</span></p><p><span style="font-family: arial;">I planned, regardless of the machine I would end up getting, to rip out the cells from the battery pack and reconstruct it - batteries are mostly a liability in old computers and I prefer the weight savings of not having them - this also means, technically, "luggable" computers could be considered.</span></p><p><span style="font-family: arial;">We will look for:</span></p><p></p><ul><li><span style="font-family: arial;">Something fast, because if I'm buying something it must be the best I can get! I don't even care about being period-accurate, this will be a monotasking monster, not a museum piece.</span></li><li><span style="font-family: arial;">Something I can program on, because hey, what if I like it and want to make modern retro-demos? Ideally, this means a Pentium I, Pentium Pro, or Pentium MMX, beautiful in-order CPUs with predictable pipelines I still know how to cycle-count (sort-of). But anything less than the dreadful Pentium 4 will do, P2 and P3s are OOO but still understandable enough.</span></li><li><span style="font-family: arial;">RAM is not an issue really, and we will max out whatever configuration we will settle on. </span></li><li><span style="font-family: arial;">Storage is not a problem either, because we will replace whatever HDD the machine comes with an SSD (yes, an actual SSD, albeit most people use compact-flash adapters instead) via an mSATA to PATA/IDE 2.5' enclosure which can fit any half-size ssd (I got a 64gb one just to be "safe" as you never know the limits of old motherboards and firmware. You do want to make sure that the machine did originally support hdds of a decent size (tens of gb) though.</span></li><li><span style="font-family: arial;">DOS-compatible (SoundBlaster-compatible) soundcard, is a must.</span></li><li><span style="font-family: arial;">A TFT screen, also is a must. The resolution doesn't really matter, but we want something as modern as possible because old LCDs were really terrible. Ideally, 640x480 would get us the best DOS compatibility, but in practice, it's not a problem.</span></li><li><span style="font-family: arial;">Ideally an sVGA card with good VESA/VBE compatibility, and with good scaling from the VGA resolutions (640x480 text, 320x200 graphics) to whatever the LCD resolution is (that means, either integer-scaling and the right LCD resolution or good quality filters when upsampling).</span></li><li><span style="font-family: arial;">An USB port is highly recommended, as we want to be able to plug in a USB storage device to easily transfer files from and to modern, internet-connected machines. Setting up networking, using PCMCIA cards, etc would be much more painful.</span></li><li><span style="font-family: arial;">We want a good keyboard. And, because we can, we want something cool looking, maybe an iconic piece of design, not some random garbage brand. Also, something that is easy to service.</span></li><li><span style="font-family: arial;">Reasonably priced. There is no way I burn 1000$ on this just because certain hardware is right now "hot", I find it borderline immoral.</span><span style="font-family: arial;"> </span></li></ul><p></p><p><span style="font-family: arial;"><b>Expectations vs Reality.</b></span></p><p><span style="font-family: arial;">After long, long deliberations, research on forums, scouting eBay and so on, I landed on an IBM ThinkPad 240x. 
The ThinkPads are amazing machines, easy to service, iconic, with great keyboards and the TrackPoint is useable in a pinch.</span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoPxgPYgri1Jk0fQ1TGUOnMtl-F_8alD92w1XfYRt3qp_l7S-usoBOtq0j92hhgbSdaUxf983Nub1GhB1ckf_h-gL0HmcxypPIeBkWQ0oZDef0B43ti5MBr1eESS9SIwT0FQbPsLvnjMjn-lYlL8Ze_xbIhlcz-RMw1y3Vq74A3FL9GbbfBdYlNnv8OA/s1440/275896315_1170718057003699_5576942662806072314_n.jpeg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1440" data-original-width="1440" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoPxgPYgri1Jk0fQ1TGUOnMtl-F_8alD92w1XfYRt3qp_l7S-usoBOtq0j92hhgbSdaUxf983Nub1GhB1ckf_h-gL0HmcxypPIeBkWQ0oZDef0B43ti5MBr1eESS9SIwT0FQbPsLvnjMjn-lYlL8Ze_xbIhlcz-RMw1y3Vq74A3FL9GbbfBdYlNnv8OA/s320/275896315_1170718057003699_5576942662806072314_n.jpeg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Beautiful! Pro-tip, a bit of 303 protectant makes the plastics look as new!</td></tr></tbody></table><p><span style="font-family: arial;">I paid around 200$ for it, you will see people getting these for 5$ at a garage sale or stuff like that, but I'm ok paying more for something that the seller verified it's running, has no issues, and so on.</span><span style="font-family: arial;"> More than that I think is crazy, but you do you...</span></p><p><span style="font-family: arial;">When it arrived it looked amazing. Yes, it had scratches on the top, and even some hairline cracks, one near a hinge and one on the bottom of the chassis, but these are not a problem as I planned to disassemble the thing anyway, see if I needed to clean the internals, replace batteries, check for any leak, re-apply thermal paste if needed and so on.</span></p><p><span style="font-family: arial;">Regardless of how much research you have done, the reality of the actual machine will surprise you in good and bad ways.</span></p><p><span style="font-family: arial;">All the hardware setup was trivial, and all the things I thought would be hard were not. </span></p><p><span style="font-family: arial;">I gutted the battery as planned (the cells were already a bit bulging). I feared the most for the initial OS setup, but my strategy worked flawlessly. I bought an IDE-to-USB adapter, connected the SSD in its SSD-to-IDE enclosure, <a href="http://theinstructionlimit.com/installing-ms-dos-6-22-on-a-486-without-a-floppy-drive-using-a-cf-to-ide-adapter">and mapped it as a virtual drive in a VirtualBox VM with Windows 98</a>. </span></p><p><span style="font-family: arial;">That allowed me to use Win98's fdisk and format to create something I knew would be recognized by the ThinkPad - I was not sure at all the same would have happened with modern tools. 
For extra safety, I also made two partitions under 2GB, to be able to format them with fat16, and the remainder of space was left in a third partition using fat32.</span></p><p><span style="font-family: arial;">Installing the OS was a breeze, and <a href="https://www.thinkwiki.org/wiki/Drivers">Lenovo still hosts all the latest IBM drivers</a> - Windows 98 just works.</span></p><p><span style="font-family: arial;">The first tiny hurdle I had to overcome was with the firmware update, IBM tools are adamant about having a charged battery to perform the update... which I clearly did not have. </span><span style="font-family: arial;">But in reality, the tool just calls a second executable, and even if the binaries have different extensions than the default the flashing tools wanted, it did not take too long to figure out the right switches to use.</span></p><p><span style="font-family: arial;">Upgrading the OS was also trivial, some people made install packs with all the official patches and lots of unofficial fixes (used <a href="https://www.mdgx.com/upd98me.php">mdgx</a> ones, <a href="https://www.htasoft.com/u98sesp/">htasoft</a> is an alternative), I just grabbed one and it mostly worked. The only issue I had is that the first time around the OS stopped booting with some DMA error, but disabling a specific patch having to do with enabling DMA on drives solved the issue. Re-installing the OS via the SSD is relatively fast, and I also used an old copy of Norton Ghost to create snapshots.</span></p><p><span style="font-family: arial;">To my surprise, even USB in DOS mostly worked (via <a href="http://bretjohnson.us/">Bret Johnson's</a> drivers, albeit many <a href="http://academy.delmar.edu/Courses/ITNW1454/Handouts/USB-Support(forDOS).html">options exist</a>). It is not 100% reliable, nor it's fast... but it does work! Same for the TrackPoint, via <a href="http://wiki.freedos.org/wiki/index.php/Mouse">cutemouse</a>.</span></p><p><span style="font-family: arial;">I ended up with the classic config.sys/autoexec.bat multiple-choice menu for things like emm386 and so on, I remember these being so painful to deal with, but in this case, it was all easy, probably also because this machine has so much RAM.</span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;">That is not to say there aren't problems. There are, but in a way, luckily for me, they seem to be unfixable, so I don't need to spend a ludicrous amount of time trying to overcome them (alright alright, I already did spend more time than it's worth, using DOSBox-debug and a few different decompilers to reverse an audio TSR... but I won't anymore I swear). And I did not foresee them.</span></p><p><span style="font-family: arial;">First, there is the VGA. I obsessed over resolutions, because I knew, that most laptops of this time <a href="https://sudonull.com/post/17165-Thinkpad-600-clean-DOS-in-2018">do not do resolution scaling</a> well. </span><span style="font-family: arial;">I had an epiphany though that allowed me to stop worrying about it. It's true that ideally, 640x480 makes you not have to worry about scaling. But! Laptops with 640x480 screens tend to be incredibly crappy and small LCDs, so much so, that the unscaled 640x480 area on a more modern laptop (say, an 800x600 panel) ends up covering a bigger screen estate and looking better!</span></p><p><span style="font-family: arial;">So, problem solved, right? Yes. If you get a card with good firmware! 
</span><span style="font-family: arial;">Unfortunately, the laptop I got has an obscure chipset that not only has crappy VESA/VBE support but is also not software-patchable via </span><a href="https://en.wikipedia.org/wiki/UniVBE" style="font-family: arial;">UniVBE</a><span style="font-family: arial;">. </span></p><p><span style="font-family: arial;"><a href="https://www.vogons.org/viewtopic.php?t=15190">Some TSRs help a bit (vbeplus, fastvid),</a> adding more modes by using other resolutions and forcing the viewport to clip, and you can play around with caching modes, but most DOS sVGA demos do not work. </span></p><p><span style="font-family: arial;">TBH, that was just plain unlucky, <a href="https://gona.mactar.hu/DOS_TESTS/">most laptops would not be this bad at sVGA</a>... but expect I guess to find at least one bit of "unlucky" hardware you did not think about in your machine.</span></p><p><span style="font-family: arial;">The other issue is with DOS audio and this is a biggie. </span></p><p><span style="font-family: arial;">Yes, I paid attention, and I got a chipset that does support DOS SoundBlaster emulation. But OMG, nobody told me it was going to be this crappy! It's basically useless, with most software just not working at all, especially when it comes to digital audio. The OPL3 FM music fares better, it tends to work, albeit it might not sound great.</span></p><p><span style="font-family: arial;">It's sad but most DOS software, especially demos, have a much higher chance of running in Windows 98 than in pure DOS, as when Windows is loaded the audio emulation is much, much better.</span></p><p><span style="font-family: arial;">This is something that apparently one simply has to live with. No PCI sound card has great DOS support, now I learned, especially with laptops, as <a href="http://dosdays.co.uk/topics/pci_sound_cards_in_dos.php">DOS audio support for PCI</a> relies on a combination of the right soundcard, the right motherboard and the right firmware. </span></p><p><span style="font-family: arial;">It doesn't help that often, when people online report audio working in DOS, they mean dos-under-windows, not pure dos... </span><span style="font-family: arial;">And you get a laptop from the pre-PCI era, then you're likely on a 486 or less, which not only will be worse in all other areas - but also many of these laptops used not to bundle any audio card at all, so they are strictly worse.</span></p><p><span style="font-family: arial;">That's not to say that there are no Pentium laptops with built-in ISA audio - there are, and probably I was again unlucky with the 240x being a rare combination of a dos-compatible-ish PCI on a "bad" motherboard (apparently using the intel 440mx chipset which does not support DDMA), but again... expect some issues, there are no perfect laptops, and even back in the day, there was hardly a configuration that would run everything flawlessly...</span></p><p><span style="font-family: arial;"><b>Conclusions.</b></span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;">Was it worth it? Should you do it? 
Yes and no...</span></p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2b8NXLqbR_WSoV-N4QITMUyitO_ki-afX1bMQU5qQEPVD_Qw9_7m_6kTw2fYyJVNumqOOoVGhlzMUvdwcpjOZ-MdetsQchOb9bGLgU44TSunk5dUll_HQdL_2x0jJzH2unfcLGcO2ZxWMUdaDVtmmb7VZQW_UyaD0guvcBDmoyXYtIgE3pTPa3Ut_9g/s1440/275940584_2020526171473360_6762627527591585210_n.jpeg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1440" data-original-width="1440" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2b8NXLqbR_WSoV-N4QITMUyitO_ki-afX1bMQU5qQEPVD_Qw9_7m_6kTw2fYyJVNumqOOoVGhlzMUvdwcpjOZ-MdetsQchOb9bGLgU44TSunk5dUll_HQdL_2x0jJzH2unfcLGcO2ZxWMUdaDVtmmb7VZQW_UyaD0guvcBDmoyXYtIgE3pTPa3Ut_9g/s320/275940584_2020526171473360_6762627527591585210_n.jpeg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">It's small!</td></tr></tbody></table><p></p><p><span style="font-family: arial;">For retro gaming, or in general, passive consumption (demos, etc), it's overall a terrible idea, I'm pretty confident all laptops would be terrible, and even most desktops.</span></p><p><span style="font-family: arial;">The early PC landscape was just a mess of incompatible devices, buggy, unpatched software, and crashes. You were lucky when things worked, and this is true today as well. DosBox is a million times more compatible than any real hardware. Yes, it has bugs, and lots of things can be more accurate, but on average it is better than real hardware.</span></p><p><span style="font-family: arial;">There are many DosBox builds out there, and I'm sure this is going to be quicky outdated, but at the time of writing I recommend:</span></p><p></p><ul><li><span style="font-family: arial;">On Windows, primarily <a href="https://dosbox-x.com/">DosBox-X</a></span></li><ul><li><span style="font-family: arial;">I also keep vanilla for <a href="https://www.vogons.org/viewtopic.php?t=7323">debugger-enabled</a> builds - you can even get a dosbox plugin for ida pro, but that's for another time, and <a href="https://yesterplay.net/dosboxece/">DosBox-ECE</a></span></li></ul><li><span style="font-family: arial;">On Mac, <a href="https://github.com/MaddTheSane/Boxer/releases">Boxer - Madds branch</a> and vanilla DosBox on Mac</span></li><ul><li><span style="font-family: arial;">Last time I tried, DosBox-X had issues on Mac with the mouse emulation - might have been fixed by now.</span></li></ul></ul><p></p><p><span style="font-family: arial;">On windows, and especially if you care about Windows of any kind, there is <a href="https://github.com/86Box/86Box">86box</a> (a fork of PCem) which is a lower-level, more accurate emulator. 
DosBox does not work great even with Win3.11, for some odd mouse emulation problems that seem to be different in each fork.</span></p><p><span style="font-family: arial;">If like me, you want to experience a monotasking machine that you can grab for a few hours at a time to play with a simpler, more focused experience, then I'd say these laptops are great fun!</span></p><p><span style="font-family: arial;">I'm even collecting a bit of a digital retro-library by mirroring old websites, often grabbed from the Wayback machine, and grabbing old magazines from the Internet Archive, to recreate the kind of reading materials I had back then...</span></p><p><span style="font-family: arial;">Overall, setting this up took me less time and energy than tinkering with a Raspberry Pi or say, trying to install a fully functional Linux on a random contemporary laptop. It's one of the least annoying projects I have embarked upon.</span></p><p><span style="font-family: arial;">My conscience feels ok too. It won't become garbage, I hate clutter, I hate having too much stuff, too many things I don't need in my life, especially digital crap that creates more problems than it really solves... With this one, I know I can sell or donate the hardware the moment I don't want to use it anymore, it's not going to be in a landfill, it's not another stupid gadget with a short lifespan.</span></p><p><span style="font-family: arial;">The best part, all the software is portable, DOS doesn't really care about the hardware, you only need to replace a few lines in your config.sys if you have specific drivers... so I can migrate all I have on this laptop to a DosBox setup (even today I do keep the two in sync) or a different machine.</span><span style="font-family: arial;"> </span></p><p><span style="font-family: arial;">Not bad. You want to try? Luckily it's easy, this is what I learned! </span><span style="font-family: arial;">You don't have to stress over the hardware (as I did), because <a href="https://www.vogons.org/viewtopic.php?t=70059&fbclid=IwAR3yHZMR_xe_FjuzRgN6QiuEZMCqyxIq0B37QBbUB44Oom3eCWoiWml6loc">none is perfect</a>.</span></p><p><span style="font-family: arial;">I went for something relatively "modern", a laptop that would have ran in its prime Windows 98/NT/2000 - and "downgraded" it to do mostly DOS - I think that's a good choice, but I don't think this ended up working much better or worse than any other option I was considering.</span></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com6tag:blogger.com,1999:blog-6950833531562942289.post-40586946683092035732022-02-02T17:05:00.018-08:002022-02-27T12:52:28.849-08:00 WTF is the Metaverse?!<p style="text-align: justify;"><span style="font-family: arial; font-size: x-small;"><i>Disclaimer! Yes, I work at Roblox. It's been a decade or so since I could pretend this space to be anonymous, and many years ago I made it clear that c0de517e/deadc0de = Angelo Pesce. And yes, my work makes me think about what this "metaverse" thing is more than the average person on the street (Roblox has been a metaverse company long, long before it was "cool"). I guess like an engineer at google might think about "the internet" more than the average person... 
But the following truly is not about what we are building at Roblox, which is something quite specific - these are my opinions, and other people might agree to some degree, and disagree with them.</i></span></p><p style="text-align: justify;"><span style="font-family: arial;">I don't like hype cycles.</span></p><p style="text-align: justify;"><span style="font-family: arial;">It is somewhat frustrating to see how supposedly experienced and rational people jump on the latest shiny bandwagon. At the same time, I guess it's comfortingly human. But that's a topic for another time...</span></p><p style="text-align: justify;"><span style="font-family: arial;">Thing is, the metaverse is undoubtedly "hot" right now, so hot that every company, regardless of what they do, wants to have a claim to it. Mostly harmless, even cute, and for some, validating years of effort pushing these ideas... But, at the same time, it dilutes the concept, it makes words mean little to nothing when you can slap them onto any product.</span></p><p style="text-align: justify;"><span style="font-family: arial;">So, let's give it a try and think really what is the metaverse, and how, if at all, is different from what we have today.</span></p><p style="text-align: justify;"><span style="font-family: arial;">In the most general sense, "the metaverse" evokes ideas of synthetic, alternative places for social interactions, entertainment, perhaps even work... living our lives.</span></p><p style="text-align: justify;"><span style="font-family: arial;">And let's set aside the possible dystopian scenarios - not the point of this, albeit, these are always important to seriously consider, while also reminding ourselves that they are levied against most society-affecting technology, from the printing press onwards.</span></p><p style="text-align: justify;"><span style="font-family: arial;">This definition is just plain... boring!</span></p><p style="text-align: justify;"><span style="font-family: arial;">It's boring because we have always been doing that, at least, since we had the ability to connect computers together. We are social animals, obviously, we want to imagine any new technology in a social space. BBS are alternative places for social interaction. And entertainment. And work. And from there on we had all kinds of shared virtual worlds, from IRC to the Mii Channel, from MUDs to World of Warcraft, from Club Penguin to Second Life, and so on. </span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhLkoCWNTMLYpySHbF0nmLSzM6N-OOCnYSQtxQSVpEp22obXCBu-kDI4dDO6fgPJrovUa3Ixq-VCzC8esJ7VfWKbJ_yQg7HEuEqmi5gZFDU72C5c9VfIwP9Q183QCGb-zIAr3y9H9JWyUk0npyNMwpgNV9PnScY-YcgXcwa8v_INv-kkSGk9G9Xr4Q8Yw" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="558" data-original-width="812" height="275" src="https://blogger.googleusercontent.com/img/a/AVvXsEhLkoCWNTMLYpySHbF0nmLSzM6N-OOCnYSQtxQSVpEp22obXCBu-kDI4dDO6fgPJrovUa3Ixq-VCzC8esJ7VfWKbJ_yQg7HEuEqmi5gZFDU72C5c9VfIwP9Q183QCGb-zIAr3y9H9JWyUk0npyNMwpgNV9PnScY-YcgXcwa8v_INv-kkSGk9G9Xr4Q8Yw=w400-h275" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">LucasFilm's <a href="https://web.stanford.edu/class/history34q/readings/Virtual_Worlds/LucasfilmHabitat.html">Habitat</a>. 
Now <a href="https://frandallfarmer.github.io/neohabitat-doc/docs/">live</a>!</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">The entire internet fits the bill, through that lens, and we don't need a new word for old ideas - outside marketing perhaps.</span></p><p style="text-align: justify;"><span style="font-family: arial;">So, let's try to find some true meaning for this word. What's new now? Is it VR/AR/XR perhaps? Web 3.0 and NFTs? The "fediverse"?</span></p><p style="text-align: justify;"><span style="font-family: arial;">Or perhaps there is nothing new really, but we just run out of ideas, explored the space of conventional social media startups already, and now trying to see if some old concept can be successful, throw a few things at the wall and see what sticks...</span></p><p style="text-align: justify;"><span style="font-family: arial;">My thesis? <b>Agency.</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Agency is the real differentiating factor. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Really, it's right there, staring at us. Like a high school kid facing an essay, sometimes it's good to look at the word itself, what does the dictionary tell us? Yes, we're going there: "In its most basic use, meta- describes a subject in a way that transcends its original limits, considering the subject itself as an object of reflection".</span></p><p style="text-align: justify;"><span style="font-family: arial;">If you're controlling your virtual, alternative, synthetic universe, you are creating something that might be spectacular, engaging, entertaining, powerful... but it's not a metaverse. </span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Videogames are not the metaverse, not even MMORPGs... Sandboxes/UGC/modding is not the metaverse. Virtual worlds are not the metaverse! </b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Yes, I'm "disqualifying" Minecraft, Second Life, Gather.Town, GTA 5, Decentraland, Skyrim, Fortnite, Eve Online, the lot - not because of the quality of these products, but because we don't need new words for existing concepts, we really don't... </span></p><p style="text-align: justify;"><span style="font-family: arial;">Obviously, the line is somewhat blurry, but if you're making most of the rules you are "just" creating a world, with varying degrees of freedom.</span></p><p style="text-align: justify;"><span style="font-family: arial;">A metaverse is an alternative living space (universe... world...) that is mostly owned by the participants, not centrally directed. Users </span><span style="font-family: arial;">create, share creations and</span><span style="font-family: arial;"> </span><span style="font-family: arial;">make all of the rules (the meta- part).</span></p><p style="text-align: justify;"><span style="font-family: arial;">Why does this distinction matter? Why is it interesting? </span></p><p style="text-align: justify;"><span style="font-family: arial;">At a shallow level, obviously, it gives you more variety, than a single virtual world. It has all the interesting implications of any platform where you do not control content. 
You are not really asking people to enter your world or use your product, you are really there to provide a service for others to create what they want to create and market it, form communities, and engage with them...</span></p><p style="text-align: justify;"><span style="font-family: arial;">But I think it's more than that. This extra agency works to create a qualitatively different community, one that is centered around the creation and sharing of creations, an economy you might call it. Something quite different from passive consumption or social co-experience.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Ironically, through this lens, most of Web 3.0 "gets is wrong"</b>, focusing on decentralizing a transaction ledger of virtual ownership, but making that ownership be simply parts of strictly controlled virtual universes. You own a certificate to a plot of digital land that someone else created and controls.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Regardless of the fact that you only own the certificate, and not the actual land, which can disappear at any moment... these kinds of worlds seem at best a coat of paint over very old and limited concepts.</span></p><p style="text-align: justify;"><span style="font-family: arial;">To me, even outside the blockchain, the entire notion of centralized versus decentralized systems, proprietary, closed versus interoperable open standards, all these concepts are really a "how", not a "what", they might be appropriate choices for a given product at a given time, but they should never be what the product "is".</span></p><p style="text-align: justify;"><span style="font-family: arial;">Without wanting to sell the metaverse as the future, I personally think that these "fake" or "weak" metaverses, together with the current hype, are what pushes people away from something that could be truly interesting.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Note also that nothing of this idea of social creativity, giving a platform for people to create and share in others' creations, has to do with new technologies. </span></p><p style="text-align: justify;"><span style="font-family: arial;">You don't need VR for any of this. You don't need hand tracking, machine learning and 3d scanning, you don't even need 3d rendering at all! </span></p><p style="text-align: justify;"><span style="font-family: arial;">These are all tools that might or might not be appropriate, but you could have perfectly great metaverses that are text only if you wanted to (remember MUDs? add the "meta" part...). And at the same time, just because you have some cool 3d technology, it does not mean you have something for the metaverse...</span></p><p style="text-align: justify;"><span style="font-family: arial;">E.g. you could have a server hosting community-created ROMs for a Commodore 64, add built-in networking to allow the ROMS to be about co-experience, add a pinch of persistence to allow people to express themselves, and you'd have a perfectly great, exciting metaverse... 
</span><span style="font-family: arial;">Or you could take something like </span><a href="https://wiki.xxiivv.com/site/uxn.html" style="font-family: arial;">UXN</a><span style="font-family: arial;"> and the vision of </span><a href="http://viznut.fi/texts-en/permacomputing.html" style="font-family: arial;">permacomputing</a><span style="font-family: arial;"> as the foundation, to reference something more contemporary...</span></p><p style="text-align: justify;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhruI5w11nTu8bCOekcGs4pvRs477NJpgIS_5ChQNLhhp_bjUMkHO7hCRMPvc_tnSqrHXqgg0aZWqDsZ-8csMGeKimRv7IJKcqt0dQEo_xUlS5W_z45wE4JlLv6dImx68QJ62tQbqDGxwTxXg7iCxW4YtZzWLrSSNJrC5lDzlCWALRZODyHEzA31DEQBg" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="555" data-original-width="740" height="300" src="https://blogger.googleusercontent.com/img/a/AVvXsEhruI5w11nTu8bCOekcGs4pvRs477NJpgIS_5ChQNLhhp_bjUMkHO7hCRMPvc_tnSqrHXqgg0aZWqDsZ-8csMGeKimRv7IJKcqt0dQEo_xUlS5W_z45wE4JlLv6dImx68QJ62tQbqDGxwTxXg7iCxW4YtZzWLrSSNJrC5lDzlCWALRZODyHEzA31DEQBg=w400-h300" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://www.pcmag.com/news/the-forgotten-world-of-bbs-door-games">BBS Door Games</a> - more proto-metaverse-y than most of today's virtual worlds.</td></tr></tbody></table><br /><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>In summary,</b> these are to me the key attributes of this metaverse idea:</span></p><p style="text-align: justify;"></p><ol style="text-align: left;"><li><span style="font-family: arial;">Inherently <b>Social</b> and interactive - as we are social animals and we want to inhabit spaces that allow socialization. This mostly means real-time networking, allowing users to connect, create and experience together.</span></li><li><span style="font-family: arial;"><b>User-Created:</b> participants have full agency over the worlds. Otherwise, you're just making a conventional virtual world. This is the "meta" part, you should not have control over the worlds, users should be able to take pieces of the universe and shape it, or completely subvert everything, own their creations. </span></li><ul><li><i style="font-family: arial;">Litmus test: if your users are "playing X", then X is not a metaverse. If they are playing X in Y, then Y might be a metaverse :)</i></li></ul><li><span style="font-family: arial;">Must have <b>Shareable Persistence</b>. Users should be able, in-universe, to store and share what they create - creating an economy, connecting worlds and people. And at the very least, the world must allow for a persistent, shared representation of self (<b>Avatars</b>). Otherwise, you're only making a piece of middleware, a game engine.</span></li></ol><p></p><p style="text-align: justify;"><span style="font-family: arial;">It's a social spin over the old, OG hacker's ethos of tinkering, creating with computers, owning their creations and sharing them. </span><span style="font-family: arial;">It has nothing to do with the particular implementation and it is not even about laws, copyright, or politics. It's a community that creates together, makes its own rules, and has full agency over these virtual creations. </span></p><p style="text-align: justify;"><span style="font-family: arial;">One more thing? 
In a truly creator-centric economy, you don't need to base all your revenue on ads, and the dark patterns they create.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Perhaps to shape that future it's more useful to revisit old, lost ideas, than thinking about shiny new overhyped toys. More <a href="http://worrydream.com/EarlyHistoryOfSmalltalk/">SmallTalk</a>'s idea of Personal Computing and <a href="https://wiki.xxiivv.com/site/plan9.html">Plan 9</a>, less NFTs and XR...</span></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com3tag:blogger.com,1999:blog-6950833531562942289.post-80159230714146103432020-12-27T14:03:00.010-08:002023-02-21T01:40:54.690-08:00Why Raytracing won't simplify AAA real-time rendering.<div style="text-align: justify;"><div><i style="font-family: arial;">"The big trick we are getting now is the final unification of lighting and shadowing across all surfaces in a game - games had to do these hacks and tricks for years now where we do different things for characters and different things for environments and different things for lights that move versus static lights, and now we are able to do all of that the same way for everything..."</i></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Who said this?</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Jensen Huang, presenting NVidia's RTX? </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Not quite... John Carmack. In 2001, at Tokyo's MacWorld, showing Doom 3 for the first time. It was though on an NVidia hardware, just a bit less powerful than today's 20xx/30xx series. A GeForce 3.</span></div><div><span style="font-family: arial;"><br /></span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_qlc3zve4TOyiBeX3nPEXOHajAKTF-j0a8KqDv_jolX_Y0Azn30tWSjVjc8-DLLWVKs3jQ6ol6Ehwb9DvT5Y_rQsYVDnoHGbHmb7OLccj6YaRBnMG2AMMW75ZASc2i8eTXhcGyTWKBiOY/s1328/Capture3.PNG" style="margin-left: auto; margin-right: auto;"><span style="font-family: arial;"><img border="0" data-original-height="458" data-original-width="1328" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_qlc3zve4TOyiBeX3nPEXOHajAKTF-j0a8KqDv_jolX_Y0Azn30tWSjVjc8-DLLWVKs3jQ6ol6Ehwb9DvT5Y_rQsYVDnoHGbHmb7OLccj6YaRBnMG2AMMW75ZASc2i8eTXhcGyTWKBiOY/w640-h221/Capture3.PNG" width="640" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: arial;">Can watch the recording on <a href="https://www.youtube.com/watch?v=80guchXqz14">YouTube</a> for a bit of nostalgia.</span></td></tr></tbody></table><span style="font-family: arial;"><br /></span><div><span style="font-family: arial;">And of course, the unifying technology at that time was stencil shadows - yes, we were at a time before shadowmaps were viable.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Now. 
I am not a fan of making long-term predictions, in fact, I believe there is a given time horizon after which things are mostly dominated by chaos, and it's just silly to talk about what's going to happen then.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">But if we wanted to make predictions, a good starting point is to look at the history, as history tends to repeat. What happened last time that we had significant innovation in rendering hardware? </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Did compute shaders lead to simpler rendering engines, or more complex? What happened when we introduced programmable fragment shaders? Simpler, or more complex? What about hardware vertex shaders - a.k.a. hardware transform and lighting...</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">And so on and so forth, we can go all the way back to the first popular accelerated video card for the consumer market, the 3dfx.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRgaHcQ7Z2zpQD1jQ-2Gk8TLlZP5-bRfyfMd81R62XULValwdnVqBuf-J-RxsflMK_QGbVPTm4o9NRLTKS7oHciRZ2DJhwYEvygix3Fqf9NMDbpDFEX4Vxqc0YrSSv62_shhLz6HVSusjW//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="796" data-original-width="1200" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRgaHcQ7Z2zpQD1jQ-2Gk8TLlZP5-bRfyfMd81R62XULValwdnVqBuf-J-RxsflMK_QGbVPTm4o9NRLTKS7oHciRZ2DJhwYEvygix3Fqf9NMDbpDFEX4Vxqc0YrSSv62_shhLz6HVSusjW/w320-h213/3570-front.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Memories... A 3dfx Voodoo. PCem has some emulation for these, if one wants to play...</td></tr></tbody></table></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Surely it must have made things simpler, not having to program software rasterizers specifically for each game, for each kind of object, for each CPU even! No more assembly. No more self-modifying code, s-buffers, software clipping, BSPs... </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">No more crazy tricks to get textures on screen, we suddenly got it all done for us, for free! Z-buffer, anisotropic filtering, perspective correction... Crazy stuff we never could even dream of is now in hardware. </span></div><div><span style="font-family: arial;">Imagine that - overnight you could have taken the bulk of your 3d engine and deleted it. Did it make engines simpler, or more complex? </span></div><div><span style="font-family: arial;">Our shaders today, powered by incredible hardware, are much more code, and much more complexity, than the software rasterizers of decades ago!</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Are there reasons to believe this time it will be any different?</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><b>Spoiler alert: no. 
</b></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">At least not in AAA real-time rendering. Complexity has nothing to do with technologies. </span></div><div><span style="font-family: arial;">Technologies can enable new products, true, but even the existence of new products is always about people first and foremost.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The truth is that our real-time rendering engines could have been dirt-simple ten years ago, there's nothing inherently complex in what we got right now.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Getting from zero to a reasonable, real-time PBR renderer is not hard. The equations are there, just render one light at a time, brute force shadowmaps, loop over all objects and shadows and you can get there. Use MSAA for antialiasing...</span></div><div><span style="font-family: arial;">Of course, you would need to trade-off performance for such relatively "brute-force" approaches, and some quality... But it's doable, and will look reasonably good.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Even better? Just download Unreal, and hire -zero- rendering engineers. </span><span style="font-family: arial;">Would you not be able to ship any game your mind can imagine?</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">The only reason we do not... is in people and products. It's organizational, structural, not technical.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We like our graphics to be cutting edge as graphics and performance still sell games, sell consoles, are talked about.</span></div><div><span style="font-family: arial;">And it's relatively inexpensive, in the grand scheme of things - rendering engineers are a small fraction of the engineering effort which in turn is not the most expensive part of making AAA games...</span></div><div><br /></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsgyJsj_GwlLUg9zwaPprZWQQiW0UIfAmDZbOvPrChxe6Fbarjqh0Dc7xtnneuVd0H1kGfSmXkKrsKMJ74K_eck5wFVDs8qEY42DEMxOhyphenhyphenm2lLQGVhzfYorecmRwLKE_Uakgf7dSBQfBVC//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="1080" data-original-width="1920" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsgyJsj_GwlLUg9zwaPprZWQQiW0UIfAmDZbOvPrChxe6Fbarjqh0Dc7xtnneuVd0H1kGfSmXkKrsKMJ74K_eck5wFVDs8qEY42DEMxOhyphenhyphenm2lLQGVhzfYorecmRwLKE_Uakgf7dSBQfBVC/w640-h360/red-dead-redemption-photo-mode.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">So pretty... Look at that sky. Worth its <a href="https://advances.realtimerendering.com/s2019/index.htm">complexity</a>, right?</td></tr></tbody></table><br /></span></div><div><span style="font-family: arial;">In AAA is perfectly ok to have someone work for say, a month, producing new, complicated code paths to save say, one millisecond in our frame time. 
It's perfectly ok often to spend a month to save a tenth of a millisecond!</span></div><div><span style="font-family: arial;">Until this equation will be true, we will always sacrifice engineering, and thus, accept bigger and bigger engines, more complex rendering techniques, in order to have larger, more beautiful worlds, rendered faster!</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">It has nothing to do with hardware nor it has anything to do with the inherent complexity of photorealistic graphics.</span></div><div><span style="font-family: arial;"> </span></div><div><span style="font-family: arial;">We write code because we're not in the business of making disruptive new games, AAA is not where risks are taken, it's where blockbuster productions are made. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">It's the nature of what we do, we don't run scrappy experimental teams, but machines with dozens of engineers and hundreds of artists. </span><span style="font-family: arial;">We're not trying to make the next Fortnite - that would require entirely different attitudes and methodologies.</span></div><div><br /></div><div><span style="font-family: arial;">And so, engineers gonna engineer, if you have a dozen rendering people on a game, its rendering will never be trivial - and once that's a thing that people do in the industry, it's hard not to do it, you have to keep competing on every dimension if you want to be at the top of the game.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><b>The cyclic nature of innovation.</b></span></div><div><br /></div><div><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5K9MTxdGJQ1lUCubQu7IL6KOkLOY5IQTdcnWRajXWzjRxdBjgOZDRe32XZMf-sIx2u7FBBkpvfXDzceQDJS9dW8zY3-Mj9IIOmjkXxE0ygEKck2dZXl0KBf-coTZI87087YYn8kq3SqAX//" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="475" data-original-width="301" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5K9MTxdGJQ1lUCubQu7IL6KOkLOY5IQTdcnWRajXWzjRxdBjgOZDRe32XZMf-sIx2u7FBBkpvfXDzceQDJS9dW8zY3-Mj9IIOmjkXxE0ygEKck2dZXl0KBf-coTZI87087YYn8kq3SqAX//" width="152" /></a></div><br /></span></div><div><span style="font-family: arial;">Another point of view, useful to make some prediction, comes from the classic works of Clayton Christensen on innovation. These are also mandatory reads if you want to understand the natural flow of innovation, from disruptive inventions to established markets.</span></div><div><span style="font-family: arial;"> </span></div><div><span style="font-family: arial;">One of the phenomena that Christensen observes is that technologies evolve in cycles of commoditization, bringing costs down and scaling, and de-commoditization, leveraging integrated, proprietary stacks to deliver innovation.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">In AAA games, rendering has not been commoditized, and the trend does not seem going towards commoditization yet. 
</span></div><div><span style="font-family: arial;">Innovation is still the driving force behind real-time graphics, not scale of production, even if we have been saying for years, perhaps decades that we were at the tipping point, in practice we never seemed to reach it.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We are not even, at least in the big titles, close to the point where production efficiency for artists and assets are really the focus.</span></div><div><span style="font-family: arial;">It's crazy to say, but still today our rendering teams typically dwarf the efforts put into tooling and asset production efficiency. </span></div><div><br /></div><div><span style="font-family: arial;">We live in a world where it's imperative for most AAA titles to produce content at a steady pace. Yet, we don't see this percolating in the technology stack, look at the actual engines (if you have experience of them), look at the talks and presentations at conferences. We are still focusing on features, quality and performance more than anything else.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We do not like to accept tradeoffs on our stacks, we run on tightly integrated technologies because we like the idea of customizing them to the game specifics - i.e. we have not embraced open standards that would allow for components in our production stacks to be shared and exchanged.</span></div><div><br /></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijB5szlUvpnu_4QFSDPkWihhMK0D5j85jJArlJrw0aCvZPZ-X8OytPb6r4g_09pFwVfAhL1jxfs7yFhd6xef7MT49c8lJGT27mkkLS84Iw2om1OBOBy2PP1dnonHv7k8rnZTQtkrEalRxj//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="634" data-original-width="1200" height="211" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijB5szlUvpnu_4QFSDPkWihhMK0D5j85jJArlJrw0aCvZPZ-X8OytPb6r4g_09pFwVfAhL1jxfs7yFhd6xef7MT49c8lJGT27mkkLS84Iw2om1OBOBy2PP1dnonHv7k8rnZTQtkrEalRxj/w400-h211/teaser.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Alita - rendered with Weta's proprietary (and RenderMan-compatible) <a href="https://jo.dreggn.org/path-tracing-in-production/">Manuka</a></td></tr></tbody></table></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">I do not think this trend will change, at the top end, for the next decade or so at least, the only time horizon I would even care to make predictions.</span></div><div><span style="font-family: arial;">I think we will see a focus on efficiency of the artist tooling, this shift in attention is already underway - but engines themselves will only keep growing in complexity - same for rendering overall.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We see just recently, in the movie industry (which is another decent way of "predicting" the future of real-time) that production pipelines are becoming somewhat standardized around common interchange formats.</span></div><div><span style="font-family: arial;">For the top studios, rendering itself is not, with most big ones running on their own 
proprietary path-tracing solutions...</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><b>So, is it all pain? And it will always be?</b></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">No, not at all! </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">We live in a fantastic world full of opportunities for everyone. There is definitely a lot of real-time rendering that has been completely commoditized and abstracted.</span></div><div><span style="font-family: arial;">People can create incredible graphics without knowing anything at all of how things work underneath, and this is definitely something incredibly new and exciting.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Once upon a time, you had to be John friggin' Carmack (and we went full circle...) to make a 3d engine, create Doom, and be legendary because of it. Your hardcore ability of pushing pixels made entire game genres that were impossible to create without the very best of technical skills.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKxzKpc9yiYiEXKPop6TNSxYK31k1-xRkqx361N_wWw8gLWgWifooZhmBtnl9DvJaBtb0E0EaAzizrlXzMRBz_OPbk8FOKzdDeAGZFTfy4_kXNnjDiP4ALTSRsLkK7Lu_ePO2oQBuzfskC//" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="1246" data-original-width="1992" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKxzKpc9yiYiEXKPop6TNSxYK31k1-xRkqx361N_wWw8gLWgWifooZhmBtnl9DvJaBtb0E0EaAzizrlXzMRBz_OPbk8FOKzdDeAGZFTfy4_kXNnjDiP4ALTSRsLkK7Lu_ePO2oQBuzfskC/w400-h250/Screen+Shot+2020-12-27+at+1.51.47+PM.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://threejs.org/">https://threejs.org/</a> frontpage.</td></tr></tbody></table></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Today? I believe a FPS templates ships for free with Unity, you can download Unreal with its source code for free, you have Godot... All products that invest in art efficiency and ease of use first and foremost.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Everyone can create any game genre with little complexity, without caring about technology - the complicated stuff is only there for cutting-edge "blockbuster" titles where bespoke engines matter, and only to some better features (e.g. fidelity, performance etc), not to fundamentally enable the game to exist...</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">And that's already professional stuff - we can do much better!</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Three.js is the most popular 3d engine on github - you don't need to know anything about 3d graphics to start creating. We have Roblox, Dreams, Minecraft and Fortnite Creative. 
We have Notch, for real-time motion graphics...</span></div><div><span style="font-family: arial;">Computer graphics has never been simpler, and at the same time, at the top end, never been more complex.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNqmNU_gxOBzVJMukQshIZB3qmVrU6V4b7tJfFzReI2WtG-h9907O0BVLgR6TWMS9y-9UNerjk56vBixPCF1vXfL97Ihf7ENXoyCEFj3l0WuNFQFpBJm2TzoY6bONK4euttNLXTA34vA7Y//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="839" data-original-width="1650" height="326" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNqmNU_gxOBzVJMukQshIZB3qmVrU6V4b7tJfFzReI2WtG-h9907O0BVLgR6TWMS9y-9UNerjk56vBixPCF1vXfL97Ihf7ENXoyCEFj3l0WuNFQFpBJm2TzoY6bONK4euttNLXTA34vA7Y/w640-h326/joseph-mcgrae-robloxscreenshot20201225-154707203.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Roblox <a href="https://www.artstation.com/artwork/xJoW0X">creations</a> are completely tech-agnostic.</td></tr></tbody></table><br /></span></div><div><span style="font-family: arial;"><b>Conclusions</b></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">AAA will stay AAA - and for the foreseeable future it will keep being wonderfully complicated.</span></div><div><span style="font-family: arial;">Slowly we will invest more in productivity for artists and asset production - as it really matters for games - but it's not a fast process.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">It's probably easier for AAA to become relatively irrelevant (compared to the overall market size - that expands faster in other directions than in the established AAA one) - than for it to radically embrace change.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Other products and other markets is where real-time rendering is commoditized and radically different. </span><span style="font-family: arial;">It -is- already, a</span><span style="font-family: arial;">ll these products already exist, and we already have huge market segments that do not need to bother at all with technical details. </span><span style="font-family: arial;">And the quality and scope of these games grows year after year.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">This market was facilitated by the fact that we have 3d hardware acceleration pretty much in any device now - but at the same time </span><span style="font-family: arial;">n</span><span style="font-family: arial;">ew h</span><span style="font-family: arial;">ardware is not going to change any of that.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Raytracing will only -add- complexity at the top end. 
</span><span style="font-family: arial;">It might make certain problems simpler, perhaps <span style="font-size: xx-small;">(note - right now people seem to underestimate how hard is to make good RT-shadows or even worse, RT-reflections, which are truly hard...)</span>, but it will also make the overall effort to produce a AAA frame bigger, not smaller - like all technologies before it.</span></div><div><span style="font-family: arial;">We'll see incredible hybrid techniques, and if we have today dozens of ways of doing shadows and combining signals to solve the rendering equation in real-time, we'll only grow these more complex - and wonderful, in the future.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">Raytracing</span><span style="font-family: arial;"> will eventually percolate to the non-AAA eventually too, as all technologies do. </span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">But that won't change complexity or open new products there either because people who are making real-time graphics with higher-level tools already don't have to care about the technology that drives them - technology there will always evolve under the hood, never to be seen by the users...</span></div></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com6tag:blogger.com,1999:blog-6950833531562942289.post-1360098209484849822020-12-17T00:10:00.011-08:002020-12-18T15:27:33.823-08:00 Hallucinations re: the rendering of Cyberpunk 2077<p style="text-align: justify;"><span style="font-family: arial;"><b>Introduction</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Two curses befall rendering engineers. First, we lose the ability to look at reality without being constantly reminded of how fascinatingly hard it is to solve light transport and model materials.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Second, when you start playing any game, you cannot refrain from trying to reverse its rendering technology (which is particularly infuriating for multiplayer titles - stop shooting at me, I'm just here to look how rocks cast shadows!).</span></p><p style="text-align: justify;"><span style="font-family: arial;">So when I bought Cyberpunk 2077 I had to look at how it renders a frame. It's very simple to take RenderDoc captures of it, so I had really no excuse.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The following are speculations on its rendering techniques, observations made while skimming captures, and playing a few hours.</span></p><p style="text-align: justify;"><span style="font-family: arial;">It's by no means a serious attempt at reverse engineering. </span><span style="font-family: arial;">For that, I lack both the time and the talent. I also rationalize doing a bad job at this by the following excuse: it's actually better this way. </span></p><p style="text-align: justify;"><span style="font-family: arial;">I think it's better to dream about how rendering (or anything really) could be, just with some degree of inspiration from external sources (in this case, RenderDoc captures), rather than exactly knowing what is going on.</span></p><p style="text-align: justify;"><span style="font-family: arial;">If we know, we know, there's no mystery anymore. 
It's what we do not know that makes us think, and sometimes we exactly guess what's going on, but other times we do one better, we hallucinate something new... Isn't that wonderful?</span></p><p style="text-align: justify;"><span style="font-family: arial;">The following is mostly a read-through of a single capture. I did open a second one to try to fill some blanks, but so far, that's all.</span></p><p style="text-align: justify;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiewj8SwNU_N_hCyL_N3fYXyw8xCfgih5wxwX2GTzRkYu-lmSiIS67kZyFuq-I53pNmpDXaJoyXs20HtcWVNZkZnZtnQpHK1Ji9D6RSOMf0pE5vCf3RL73KGThaPN3WxogEGhq7CETGL4zl/s957/Capture.PNG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="536" data-original-width="957" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiewj8SwNU_N_hCyL_N3fYXyw8xCfgih5wxwX2GTzRkYu-lmSiIS67kZyFuq-I53pNmpDXaJoyXs20HtcWVNZkZnZtnQpHK1Ji9D6RSOMf0pE5vCf3RL73KGThaPN3WxogEGhq7CETGL4zl/w400-h224/Capture.PNG" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">This is the frame we are going to look at.</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">I made the captures at high settings, without RTX or DLSS as RenderDoc does not allow these (yet?). I disabled motionblur and other uninteresting post-fx and made sure I was moving in all captures to be able to tell a bit better when passes access previous frame(s) data.</span></p><p style="text-align: justify;"><span style="font-family: arial;">I am also not relying on insider information for this. Makes everything easier and more fun.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>The basics</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">At a glance, it doesn't take long to describe the core of Cyberpunk 2077 rendering.</span></p><p style="text-align: justify;"><span style="font-family: arial;">It's a classic deferred renderer, with a fairly vanilla g-buffer layout. 
We don't see the crazy amount of buffers of say, Suckerpunch's PS4 launch Infamous:Second Son, nor complex bit-packing and re-interpretation of channels.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsFUZHiG48oyp5Igr5H6xG7hu8UtcJUkfvXwq0KbqhKAICSlOGnTnPE0RE5ZzxVk0V5uGJCGz5c4pKf8rGRLf5TrtEbLZ-YfyQbeQfoyJ61UiuiuX3tbqNOFlHhtPWyexNmM0fIk4aSeGn/s1690/001.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="948" data-original-width="1690" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsFUZHiG48oyp5Igr5H6xG7hu8UtcJUkfvXwq0KbqhKAICSlOGnTnPE0RE5ZzxVk0V5uGJCGz5c4pKf8rGRLf5TrtEbLZ-YfyQbeQfoyJ61UiuiuX3tbqNOFlHhtPWyexNmM0fIk4aSeGn/w640-h360/001.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Immediately recognizable g-buffer layout</td></tr></tbody></table><ul><li><span style="font-family: arial;">10.10.10.2 Normals, with the 2-bit alpha reserved to mark hair</span></li><li><span style="font-family: arial;">10.10.10.2 Albedo. Not clear what the alpha is doing here, it seems to just be set to one for everything drawn, but it might be only the captures I got</span></li><li><span style="font-family: arial;">8.8.8.8 Metalness, Roughness, Translucency and Emissive, in this order (RGBA)</span></li><li><span style="font-family: arial;">Z-buffer and Stencil. The latter seems to isolate object/material types. Moving objects are tagged. Skin. Cars. Vegetation. Hair. Roads. Hard to tell / would take time to identify the meaning of each bit, but you get the gist...</span></li></ul><p></p><p style="text-align: justify;"><span style="font-family: arial;">If we look at the frame chronologically, it starts with a bunch of UI draws (that I didn't investigate further), a bunch of copies from a CPU buffer into VS constants, then a shadowmap update (more on this later), and finally a depth pre-pass.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLcId3tGBV9fcaPKcEdVMrW-Qz9KFSlYFRWu4HCKMeh_3T8lRrpwHoXK0YCJqC_7oMPOPkQCVAnkBhPDWf2X8ms4dZKWBMfizFFDTz7TpHeRrJILOyFpZdbREHpajlQLk8Nz2BQodIdmzB/s1056/002.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="592" data-original-width="1056" height="358" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLcId3tGBV9fcaPKcEdVMrW-Qz9KFSlYFRWu4HCKMeh_3T8lRrpwHoXK0YCJqC_7oMPOPkQCVAnkBhPDWf2X8ms4dZKWBMfizFFDTz7TpHeRrJILOyFpZdbREHpajlQLk8Nz2BQodIdmzB/w640-h358/002.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Some stages of the depth pre-pass.</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">This depth pre-pass is partial (not drawing the entire scene) and is only used to reduce the overdraw in the subsequent g-buffer pass.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Basically, all the geometry draws are using instancing and some form of bindless textures. 
I'd imagine this was a big part of updating the engine from The Witcher 3 to contemporary hardware. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Bindless also makes it quite annoying to look at the capture in renderDoc unfortunately - by spot-checking I could not see too many different shaders in the g-buffer pass - perhaps a sign of not having allowed artists to make shaders via visual graphs? </span></p><p style="text-align: justify;"><span style="font-family: arial;">Other wild guesses: I don't see any front-to-back sorting in the g-buffer, and the depth prepass renders all kinds of geometries, not just walls, so it would seem that there is no special authoring for these (brushes, forming a BSP) - nor artists have hand-tagged objects for the prepass, as some relatively "bad" occluders make the cut. I imagine that after culling a list of objects is sorted by shader and from there instanced draws are dynamically formed on the CPU.</span></p><span style="font-family: arial;">The opening credits do not mention Umbra (which was used in The Witcher 3) - so I guess CDPr rolled out their own visibility solution. Its effectiveness is really hard to gauge, as visibility is a GPU/CPU balance problem, but there seem to be quite a few draws that do not contribute to the image, for what's worth. It also looks like that at times the rendering can display "hidden" rooms, so it looks like it's not a cell and portal system - I am guessing that for such large worlds it's impractical to ask artists to do lots of manual work for visibility.</span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNnmDb2QhIqOpP3rjXcUZVnW0N3AtvfY49KonHdgG4fk2r9_HNg3m8Q8rpkDOJ6PTqjWur2J9GdTFSbX5rlxJwrIIf3Mh9dovvg9IrLA7HgQYv0HStUwpdtjuuIHHh97dvsBRYEz3yQdMG/s1300/003.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="486" data-original-width="1300" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNnmDb2QhIqOpP3rjXcUZVnW0N3AtvfY49KonHdgG4fk2r9_HNg3m8Q8rpkDOJ6PTqjWur2J9GdTFSbX5rlxJwrIIf3Mh9dovvg9IrLA7HgQYv0HStUwpdtjuuIHHh97dvsBRYEz3yQdMG/w640-h240/003.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A different frame, with some of the pre-pass. <br />Looks like some non-visible rooms are drawn then covered by the floor - which might hint at culling done without old-school brushes/BSP/cell&portals?</td></tr></tbody></table></span><p style="text-align: justify;"><span style="font-family: arial;">Lastly, I didn't see any culling done GPU side, with depth pyramids and so on, no per-triangle or cluster culling or predicated draws, so I guess all frustum and occlusion culling is CPU-side.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><i>Note: people are asking if "bad" culling is the reason for the current performance issues, I guess meaning on ps4/xb1. This inference cannot be done, nor the visibility system can be called "bad" - as I wrote already. FWIW - it seems mostly that consoles struggle with memory and streaming more than anything else. Who knows...</i></span></p><p style="text-align: justify;"><span style="font-family: arial;">Let's keep going... 
After the main g-buffer pass (which seems to be always split in two - not sure if there's a rendering reason or perhaps these are two command buffers done on different threads), there are other passes for moving objects (which write motion vectors - the motion vector buffer is first initialized with camera motion).</span></p><p style="text-align: justify;"><span style="font-family: arial;">This pass includes avatars, and the shaders for these objects do not use bindless (perhaps that's used only for world geometry) - so it's much easier to see what's going on there if one wants to.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Finally, we're done with the main g-buffer passes, depth-writes are turned off and there is a final pass for decals. Surprisingly these are pretty "vanilla" as well, most of them being mesh decals.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Mesh decals bind as inputs (a copy of) the normal buffer, which is interesting as one might imagine the 10.10.10 format was chosen to allow for easy hardware blending, but it seems that some custom blend math is used as well - something important enough to pay for the price of making a copy (on PC at least).</span></p><p style="text-align: justify;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiqNDYiY-MK7so-RnRKKrb039Dpu1do3ycNQJ0Or4Xvj3YBlOCYtjr6XmrkshAtpQ05jfI8t0XFUVz1akvUXxOOajiu9IR_8mFWYE-6-zFbwOq3FjbHlGs08l0EmXx_vucQC1xUPsWp1Dw/s1086/004.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="606" data-original-width="1086" height="358" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiqNDYiY-MK7so-RnRKKrb039Dpu1do3ycNQJ0Or4Xvj3YBlOCYtjr6XmrkshAtpQ05jfI8t0XFUVz1akvUXxOOajiu9IR_8mFWYE-6-zFbwOq3FjbHlGs08l0EmXx_vucQC1xUPsWp1Dw/w640-h358/004.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A mesh decal - note how it looks like the original mesh with the triangles that do not map to decal textures removed.</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">It looks like only triangles carrying decals are rendered, using special decal meshes, but other than that everything is remarkably simple. 
It's not bindless either </span><span style="font-family: arial;">(only the main static geometry g-buffer pass seems to be)</span><span style="font-family: arial;">, so it's easier to see what's going on here.</span></p><p style="text-align: justify;"><span style="font-family: arial;">At the end of the decal pass we see sometimes projected decals as well, I haven't investigated dynamic ones created by weapons, but the static ones on the levels are just applied with tight boxes around geometry, I guess hand-made, without any stencil-marking technique (which would probably not help in this case) to try to minimize the shaded pixels.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Projected decals do bind depth-stencil as input as well, obviously as they need the scene depth, to reconstruct world-space surface position and do the texture projection, but probably also to read stencil and avoid applying these decals on objects tagged as moving.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6flBCRJpMqtaglNIcyfVhSTj0o2lJ0cSvBi4xLebcuAvZYJlOvnj4HqvnplQpaqNkU0YLIn0QlV56MpuYOhOxbowiHdOWKrXisFoPr3Mu5T9WBhEWMWDjgj6mVOCI2s72q_o5E7XNP_i8/s1520/005.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="438" data-original-width="1520" height="184" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6flBCRJpMqtaglNIcyfVhSTj0o2lJ0cSvBi4xLebcuAvZYJlOvnj4HqvnplQpaqNkU0YLIn0QlV56MpuYOhOxbowiHdOWKrXisFoPr3Mu5T9WBhEWMWDjgj6mVOCI2s72q_o5E7XNP_i8/w640-h184/005.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A projected decal, on the leftmost wall (note the decal box in yellow)</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">As for the main g-buffer draws, many of the decals might end up not contributing at all to the image, and I don't see much evidence of decal culling (as some tiny ones are draws) - but it also might depend on my chosen settings.</span></p><p style="text-align: justify;"><span style="font-family: arial;">The g-buffer pass is quite heavy, but it has lots of detail and it's of course the only pass that depends on scene geometry, a fraction of the overall frame time. E.g. look at the normals on the ground, pushed beyond the point of aliasing. 
At least on this PC capture, textures seem even biased towards aliasing, perhaps knowing that temporal will resolve them later (which absolutely does in practice, rotating the camera often reveals texture aliasing that immediately gets resolved when stopped - not a bad idea, especially as noise during view rotation can be masked by motion blur).</span></p><p style="text-align: justify;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFhOfGNIcGpfT4w_Am0P2NQL38_SLmUu18pAOtboun6m9gt3Fp08oOirfYrZT16pBtbhVQtpp7MTSBdBXmkI87dCUB8E57pSE2ud1ZZ48FbGE_uqwT8ui3Y9FLfsHkV6VI8xN360YEVe4L/s1010/Capture.PNG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="772" data-original-width="1010" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFhOfGNIcGpfT4w_Am0P2NQL38_SLmUu18pAOtboun6m9gt3Fp08oOirfYrZT16pBtbhVQtpp7MTSBdBXmkI87dCUB8E57pSE2ud1ZZ48FbGE_uqwT8ui3Y9FLfsHkV6VI8xN360YEVe4L/w320-h245/Capture.PNG" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">1:1 crop of the final normal buffer</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;"><b>A note re:Deferred vs Forward+</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Most state-of-the-art engines are deferred nowadays. Frostbite, Guerrilla's Decima, Call of Duty BO3/4/CW, Red Dead Redemption 2, Naughty Dog's Uncharted/TLOU and so on.</span></p><p style="text-align: justify;"><span style="font-family: arial;">On the other hand, the amount of advanced trickery that Forward+ allows you is unparalleled, and it has been adopted by a few to do truly incredible rendering, see for example the latest Doom games or have a look at the mind-blowing tricks behind Call of Duty: Modern Warfare / Warzone (and the previous Infinity Warfare which was the first time that COD line moved from being a crazy complex forward renderer to a crazy complex forward+).</span></p><p style="text-align: justify;"><span style="font-family: arial;">I think the jury is still out on all this, and as most thing rendering (or well, coding!) we don't know anything about what's optimal, we just make/inherit choices and optimize around them. </span></p><p style="text-align: justify;"><span style="font-family: arial;">That said, I'd wager this was a great idea for CP2077 - and I'm not surprised at all to see this setup. As we'll see in the following, CP2077 does not seem to have baked lighting, relying instead on a few magic tricks, most of which operating in screen-space.</span></p><p style="text-align: justify;"><span style="font-family: arial;">For these to work, you need before lighting to know material and normals, so you need to write a g-buffer anyways. Also you need temporal reprojection, so you want motion vectors and to compute lighting effects in separate passes (that you can then appropriately reproject, filter and composite).</span></p><p style="text-align: justify;"><span style="font-family: arial;">I would venture to say also that this was done not because of the need for dynamic GI - there's very little from what I've seen in terms of moving lights and geometry is not destructible. 
</span><span style="font-family: arial;">I imagine instead, this is because the storage and runtime memory costs of baked lighting would be too big. Plus, it's easier to make lighting interactive for artists in such a system, rather than trying to write a realtime path-tracer that accurately simulates what your baking system results would be...</span></p><div><span style="font-family: arial;">Lastly, as we're already speculating things, I'd imagine that CDPr wanted really to focus on artists and art. A deferred renderer can help there in two ways. First, it's performance is less coupled with the number of objects and vertices on screen, as only the g-buffer pass depends on them, so artists can be a smidge less "careful" about these. </span></div><div><span style="font-family: arial;">Second, it's simpler, overall - and in an open-world game you already have to care about so many things, that having to carefully tune your gigantic foward+ shaders for occupancy is not a headache you want to have to deal with...</span></div><p style="text-align: justify;"><span style="font-family: arial;"><b>Lighting part 1: Analytic lights</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Obviously, no deferred rendering analysis can stop at the g-buffer, we split shading in two, and we have now to look at the second half, how lighting is done.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Here things become a bit dicier, as in the modern age of compute shaders, everything gets packed into structures that we cannot easily see. Even textures can be hard to read when they do not carry continuous data but pack who-knows-what into integers.</span></p><p style="text-align: justify;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjgbjTUeofqHBlxybWbw8SK2v0niDzxGO4Br0Pl6tI4cdCIX0Qy18vITFRG0eyHuBtjviV1BcWKmX9c0Ivrz3AhOPaW4Ugez-zYgnjdNBHWB_tTQJCRriOB2lRfS8zF-o2wClfsrLB2DQY/s1614/006B.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="662" data-original-width="1614" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjgbjTUeofqHBlxybWbw8SK2v0niDzxGO4Br0Pl6tI4cdCIX0Qy18vITFRG0eyHuBtjviV1BcWKmX9c0Ivrz3AhOPaW4Ugez-zYgnjdNBHWB_tTQJCRriOB2lRfS8zF-o2wClfsrLB2DQY/w640-h262/006B.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Normal packing and depth pyramid passes.</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="font-family: arial;">Regardless, it's pretty clear that after all the depth/g-buffer work is said and done, a uber-summarization pass kicks in taking care of a bunch of depth-related stuff.</span></div><div><span style="font-family: arial;"><br /></span></div><div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHxkkJcTfYkpUQ5UH4YXYcoJIw1Lez5dHEeK7D3XgMOuDUlL1hnX2_Y94_7OoRK4vv1xIRXfIe5B8sspLHBdGyZB73kh_lHElQIUpdereSgwr0NC3R32XZhjfqXwbUa7MRa1Lj6QdgpXJG/s734/Capture.PNG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="393" data-original-width="734" height="214" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHxkkJcTfYkpUQ5UH4YXYcoJIw1Lez5dHEeK7D3XgMOuDUlL1hnX2_Y94_7OoRK4vv1xIRXfIe5B8sspLHBdGyZB73kh_lHElQIUpdereSgwr0NC3R32XZhjfqXwbUa7MRa1Lj6QdgpXJG/w400-h214/Capture.PNG" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">RGBA8 packed normal (&roughness). Note the speckles that are a tell-tale of best-fit-normal encoding.<br />Also, note that this happens after hair rendering - which we didn't cover.</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">It first packs normal and roughness into a RGBA8 using Crytek's lookup-based best-fit normal encoding, then it creates a min-max mip pyramid of depth values.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJ3ykQacKyHy95wfP1msHF3oWkq4Q32NIhFnhClfzYYWddgELGQDYMYgiUgSGDhJ0cUDi9AdonnjxSdR0xWGQ6Gd-jO6zgAmdqJKFEVXMJjsaj3gV47qxTsZA12i5yof9_v_4W5ol00cw4//" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="420" data-original-width="1536" height="176" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJ3ykQacKyHy95wfP1msHF3oWkq4Q32NIhFnhClfzYYWddgELGQDYMYgiUgSGDhJ0cUDi9AdonnjxSdR0xWGQ6Gd-jO6zgAmdqJKFEVXMJjsaj3gV47qxTsZA12i5yof9_v_4W5ol00cw4/w640-h176/006C.png" width="640" /></a></span></div><p></p><p style="text-align: justify;"><span style="font-family: arial;">The pyramid is then used to create what looks like a volumetric texture for clustered lighting.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0TuSagLTQtW0blyN3YlM7H9iqm3mPDQf7VZdnXxh-vR54mWtnNeZf7zuOmQJjUvsGQgwWlSs8V-bN5mnzUzghZg9SpCL7TwEpfSLGUjOR4YxBqO8bCnOROpO7mB8mgFq3rNNgs5udnkQL/s1242/007.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="732" data-original-width="1242" height="236" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0TuSagLTQtW0blyN3YlM7H9iqm3mPDQf7VZdnXxh-vR54mWtnNeZf7zuOmQJjUvsGQgwWlSs8V-bN5mnzUzghZg9SpCL7TwEpfSLGUjOR4YxBqO8bCnOROpO7mB8mgFq3rNNgs5udnkQL/w400-h236/007.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A slice of what looks like the light cluster texture, and below one of the lighting buffers partially computed. Counting the pixels in the empty tiles, they seem to be 16x16 - while the clusters look like 32x32?</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="font-family: arial;">So - from what I can see it looks like a clustered deferred lighting system. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;">The clusters seem to be 32x32 pixels in screen-space (froxels), with 64 z-slices. 
The lighting though seems to be done at a 16x16 tile granularity, all via compute shader indirect dispatches.</span></p><p style="text-align: justify;"><span style="font-family: arial;">I would venture this is because CS are specialized by both the materials and lights present in a tile, and then dispatched accordingly - a common setup in contemporary deferred rendering systems (e.g. see Call of Duty Black Ops 3 and Uncharted 4 presentations on the topic).</span></p><p style="text-align: justify;"><span style="font-family: arial;">Analytic lighting pass outputs two RGBA16 buffers, which seems to be diffuse and specular contributions. Regarding the options for scene lights, I would not be surprised if all we have are spot/point/sphere lights and line/capsule lights. Most of Cyberpunk's lights are neons, so definitely line light support is a must.</span></p><p style="text-align: justify;"><span style="font-family: arial;">You'll also notice that a lot of the lighting is unshadowed, and I don't think I ever noticed multiple shadows under a single object/avatar. I'm sure that the engine does not have limitations in that aspect, but all this points at lighting that is heavily "authored" with artists carefully placing shadow-casting lights. I would also not be surprised if the lights have manually assigned bounding volumes to avoid leaks.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN1R3UOIwnXfVoC3txC57U8WqCyFH5XWhbz_Vl9uxcSVOLtXHXuYBzeA_EG7Wurn_-LOqWh22L-q1rTt5sEWvXK7swN3o9pVi_Oe9Y1j4pUBCMRBAMgl3iDUy3W-ohTh_aHHcYjNkggz_d/s1588/008.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="440" data-original-width="1588" height="178" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN1R3UOIwnXfVoC3txC57U8WqCyFH5XWhbz_Vl9uxcSVOLtXHXuYBzeA_EG7Wurn_-LOqWh22L-q1rTt5sEWvXK7swN3o9pVi_Oe9Y1j4pUBCMRBAMgl3iDUy3W-ohTh_aHHcYjNkggz_d/w640-h178/008.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Final lighting buffer (for analytic lights) - diffuse and specular contributions.</td></tr></tbody></table><span style="font-family: arial;"><br /></span><span style="font-family: arial;"><b>Lighting part 2: Shadows</b></span><p></p><p style="text-align: justify;"><span style="font-family: arial;">But what we just saw does not mean that shadows are unsophisticated in Cyberpunk 2077, quite the contrary, there are definitely a number of tricks that have been employed, most of them not at all easy to reverse!</span></p><p style="text-align: justify;"><span style="font-family: arial;">First of all, before the depth-prepass, there are always a bunch of draws into what looks like a shadowmap. I suspect this is a CSM, but in the capture I have looked at, I have never seen it used, only rendered into. 
This points to a system that updates shadowmaps over many frames, likely with only static objects?</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKRxVqXmSE8YlELRv02rjbQTP4xUyOSL7SsLhg0YJSKVgRcta9Ec0vK9evNXEMZqbS91HzrmZt5t4HsrDDTsGbFnJxsniPe3_SDHLxHrG6XbWW1JhedK2hBbgrmFtBDllBlUWuMTDLmlvr//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="644" data-original-width="1350" height="306" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKRxVqXmSE8YlELRv02rjbQTP4xUyOSL7SsLhg0YJSKVgRcta9Ec0vK9evNXEMZqbS91HzrmZt5t4HsrDDTsGbFnJxsniPe3_SDHLxHrG6XbWW1JhedK2hBbgrmFtBDllBlUWuMTDLmlvr/w640-h306/009.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Is this a shadowmap? Note that there are only a few events in this capture that write to it, none that reads - it's just used as a depth-stencil target, if RenderDoc is correct here...</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><p></p><p style="text-align: justify;"><span style="font-family: arial;">These multi-frame effects are complicated to capture, so I can't say if there are further caching systems (e.g. see the quadtree compressed shadows of Black Ops 3) at play. </span></p><p style="text-align: justify;"><span style="font-family: arial;">One thing that looks interesting is that if you travel fast enough through a level (e.g. in a car) you can see that the shadows take some time to "catch up" and they fade in incrementally in a peculiar fashion. It almost appears like there is a depth offset applied from the sun point of view, that over time gets reduced. Interesting!</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9J8Ao2cX0QMjVTg1aGA7DrGVSDBtskMOqtHUTfiVGYyBws8cblmriwGoGRtpLct29wZhjCtTH0yZ2RDIVTVxzWIEwgBu2dRoxqxIFUu5m5k1m750fQo4oTddaEvZwHUOG9sBwjGHM4Y5h/s1726/010.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="606" data-original-width="1726" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9J8Ao2cX0QMjVTg1aGA7DrGVSDBtskMOqtHUTfiVGYyBws8cblmriwGoGRtpLct29wZhjCtTH0yZ2RDIVTVxzWIEwgBu2dRoxqxIFUu5m5k1m750fQo4oTddaEvZwHUOG9sBwjGHM4Y5h/w640-h224/010.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">This is hard to capture in an image, but note how the shadow in time seems to crawl "up" towards the sun.</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="font-family: arial;">Sun shadows are pre-resolved into a screen-space buffer prior to the lighting compute pass, I guess to simplify compute shaders and achieve higher occupancy. This buffer is generated in a pass that binds quite a few textures, two of which look CSM-ish. 
One is clearly a CSM, with in my case five entries in a texture array, where slices 0 to 3 are different cascades, but the last slice appears to be the same cascade as slice 0 but from a slightly different perspective. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;">There's surely a lot to reverse-engineer here if one was inclined to do the work!</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdu3haxhdzfvnezytpWnr-49CEdx9alvnSCEMA1tFKgqabuIMb2-d2PGUFGvIE9Le6pLQmhRxhJe4Y2D029vk9SohHru8qIQ7f8oNjXB7PKuZn-JPZCvFn9mTixfcOhdpscYpGQjzAowmS/s1499/Capture.PNG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="591" data-original-width="1499" height="252" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdu3haxhdzfvnezytpWnr-49CEdx9alvnSCEMA1tFKgqabuIMb2-d2PGUFGvIE9Le6pLQmhRxhJe4Y2D029vk9SohHru8qIQ7f8oNjXB7PKuZn-JPZCvFn9mTixfcOhdpscYpGQjzAowmS/w640-h252/Capture.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The slices of the texture on the bottom (in red) are clearly CSM. The partially rendered slices in gray are a mystery. The yellow/green texture is, clearly, resolved screen-space sun shadows, I've never, so far, seen the green channel used in a capture.</td></tr></tbody></table><span style="font-family: arial;"><br /></span><span style="font-family: arial;">All other shadows in the scene are some form of VSMs, computed again incrementally over time. I've seen 512x512 and 256x256 used, and in my captures, I can see five shadowmaps rendered per frame, but I'm guessing this depends on settings. Most of these seem only bound as render targets, so again it might be that it takes multiple frames to finish rendering them. One gets blurred (VSM) into a slice of a texture array - I've seen some with 10 slices and others with 20.</span><p></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJ-phQypp7PXISAtQtw6FPdUNaysFAWF4WVA3VC13YdItUUUpnHMZZmkBWNxob1UHcrQsFdwU3JbkbBFmNtQI-futL0JpA7gMRQ32EjCyUTJOqZvyHueoTjCU9ypL3eAk0bsLAuwIomxVY//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="430" data-original-width="1297" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJ-phQypp7PXISAtQtw6FPdUNaysFAWF4WVA3VC13YdItUUUpnHMZZmkBWNxob1UHcrQsFdwU3JbkbBFmNtQI-futL0JpA7gMRQ32EjCyUTJOqZvyHueoTjCU9ypL3eAk0bsLAuwIomxVY/w640-h212/Capture1.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A few of the VSM-ish shadowmaps on the left, and artefacts of the screen-space raymarched contact shadows on the right, e.g. 
under the left arm, the scissors and other objects in contact with the plane...</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><span style="font-family: arial;"><br /></span><span style="font-family: arial;">Finally, we have what the game settings call "contact shadows" - which are screen-space, short-range raymarched shadows. These seem to be computed by the lighting compute shaders themselves, which would make sense as these know about lights and their directions...</span><p></p><p style="text-align: justify;"><span style="font-family: arial;">Overall, shadows are both simple and complex. The setup, with CSMs, VSMs, and optionally raymarching is not overly surprising, but I'm sure the devil is in the detail of how all these are generated and faded in. It's rare to see obvious artifacts, so the entire system has to be praised, especially in an open-world game!</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>Lighting part III: All the rest...</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">Since booting the game for the first time I had the distinct sense that most lighting is actually not in the form of analytic lights - and indeed looking at the captures this seems to not be unfounded. At the same time, there are no lightmaps, and I doubt there's anything pre-baked at all. This is perhaps one of the most fascinating parts of the rendering.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRpbjtMBejy0MKqy7rbK0wd_cG6Xr5uMa01fMhkDWhXwyc0zgTZ5xI-RIbs3nCAhudQkU5SVlgG85KSVxGoYoZua7MWx91s6nvszQ2LpGbyOeRWqSJiUl7KCbFH4NJB4L5nASffOGBxMSO/s1652/011.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="598" data-original-width="1652" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRpbjtMBejy0MKqy7rbK0wd_cG6Xr5uMa01fMhkDWhXwyc0zgTZ5xI-RIbs3nCAhudQkU5SVlgG85KSVxGoYoZua7MWx91s6nvszQ2LpGbyOeRWqSJiUl7KCbFH4NJB4L5nASffOGBxMSO/w640-h232/011.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">First pass highlighted is the bent-cone AO for this frame, remaining passes do smoothing and temporal reprojection.</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="font-family: arial;">First of all, there is a very good half-res SSAO pass. This is computed right after the uber-depth-summarization pass mentioned before, and it uses the packed RGBA8 normal-roughness instead of the g-buffer one. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;">It looks like it's computing bent normals and aperture cones - impossible to tell the exact technique, but it's definitely doing a great job, probably something along the lines of HBAO-GTAO. First, depth, normal/roughness, and motion vectors are all downsampled to half-res. Then a pass computes current-frame AO, and subsequent ones do bilateral filtering and temporal reprojection. 
The dithering pattern is also quite regular if I had to guess, probably Jorge's Gradient noise?</span></p><p style="text-align: justify;"><span style="font-family: arial;">It's easy to guess that the separate diffuse-specular emitted from the lighting pass is there to make it easier to occlude both more correctly with the cone information.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4T0YIJKQNY2CI4hMIFAopAjR86Lsp48XvaWtvMMVypwWdSsr4B8qXiqVVY13aI0oPJ_TYpsOFSjdVOJfkqsqSbtvNHJ624kzluVHcodeYij2Wp5hRy4tP0LULkkT2XJWoELmnwctPKQq6//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="592" data-original-width="654" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4T0YIJKQNY2CI4hMIFAopAjR86Lsp48XvaWtvMMVypwWdSsr4B8qXiqVVY13aI0oPJ_TYpsOFSjdVOJfkqsqSbtvNHJ624kzluVHcodeYij2Wp5hRy4tP0LULkkT2XJWoELmnwctPKQq6/w400-h362/012.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">One of many specular probes that get updated in an array texture, generating blurred mips.</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><span style="font-family: arial;"><br /></span><span style="font-family: arial;">Second, we have to look at indirect lighting. After the light clustering pass there are a bunch of draws that update a texture array of what appear to be spherically (or dual paraboloid?) unwrapped probes. Again, this is distributed across frames, not all slices of this array are updated per frame. It's not hard to see in captures that some part of the probe array gets updated with new probes, generating on the fly mipmaps, presumably GGX-prefiltered. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgn77lqwfzGBPka7V5POHrs1WS6ZwvPdQ7U3L-CNr5B1LK8C8Ixn3A-6xOCTHvixRJP0qNAHICpKWobFTcpOmZIs3O6Hgb8Rn9USWxkUwFc_WugKAo80xbJg6rroh8HqEZrpcdKWYkKh83N//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="572" data-original-width="1312" height="175" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgn77lqwfzGBPka7V5POHrs1WS6ZwvPdQ7U3L-CNr5B1LK8C8Ixn3A-6xOCTHvixRJP0qNAHICpKWobFTcpOmZIs3O6Hgb8Rn9USWxkUwFc_WugKAo80xbJg6rroh8HqEZrpcdKWYkKh83N/w400-h175/013.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A mysterious cubemap. It looks like it's compositing sky (I guess that dynamically updates with time of day) with some geometry. 
Is the red channel an extremely thing g-buffer?</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><p></p><p style="text-align: justify;"><span style="font-family: arial;">The source of the probe data is harder to find though, but in the main capture I'm using there seems to be something that looks like a specular cubemap relighting happening, it's not obvious to me if this is a different probe from the ones in the array or the source for the array data later on. </span></p><p style="text-align: justify;"><span style="font-family: arial;">Also, it's hard to say whether or not these probes are hand placed in the level, if the relighting assumption is true, then I'd imagine that the locations are fixed, and perhaps artist placed volumes or planes to define the influence area of each probe / avoid leaks.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifta6S4M50FJ8iQhPMJo4YaT1nbUSoJYy9vIQpFAmQqT8YgezfHifQHwouMqGVY95RZXsMa-9wzONbqPpdQwJqAnpUvBFDdzCVXwi8RmyZ4v8Z6cu6OlpVdUxUcxBgwHcD9TLqLQvPiGQ4//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="572" data-original-width="1542" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifta6S4M50FJ8iQhPMJo4YaT1nbUSoJYy9vIQpFAmQqT8YgezfHifQHwouMqGVY95RZXsMa-9wzONbqPpdQwJqAnpUvBFDdzCVXwi8RmyZ4v8Z6cu6OlpVdUxUcxBgwHcD9TLqLQvPiGQ4/w640-h238/014.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A slice of the volumetric lighting texture, and some disocclusion artefacts and leaks in a couple of frames.</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><span style="font-family: arial;"><br /></span><span style="font-family: arial;">We have your "standard" volumetric lighting, computed in a 3d texture, with both temporal reprojection. The raymarching is clamped using the scene depth, presumably to save performance, but this, in turn, can lead to leaks and reprojection artifacts at times. Not too evident though in most cases.</span><p></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9F4CHOnSnTF_jNqXOyBW25QbNDGa28sRHrRG34uuDW1AGP0oWHf2glsg5dmHMFvxeWvNj3wC0xKS3nKxyFnkEHn9_OSidhessylhbTDTIB7L4WLj207ngqvjECyQinpXBROuJV98BGLQR/s1378/015.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="774" data-original-width="1378" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9F4CHOnSnTF_jNqXOyBW25QbNDGa28sRHrRG34uuDW1AGP0oWHf2glsg5dmHMFvxeWvNj3wC0xKS3nKxyFnkEHn9_OSidhessylhbTDTIB7L4WLj207ngqvjECyQinpXBROuJV98BGLQR/w640-h360/015.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Screen-Space Reflections</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;">Now, things get very interesting again. 
First, we have an is an amazing Screen-Space Reflection pass, which again uses the packed normal/roughness buffer and thus supports blurry reflections, and at least at my rendering settings, is done at full resolution. </span></p><p style="text-align: justify;"><span style="font-family: arial;">It uses previous-frame color data, before UI compositing for the reflection (using motion vectors to reproject). And it's quite a lot of noise, even if it employs a blue-noise texture for dithering!</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUSjq8P8EGGAOezOnx-LExGc6bF5N29VqvXjLOd1Kcuu6lRedChApZ1HwZ_H8LTW62W8v7GQUcvTowm4I4a5NvF-ogtKLsDISj8ARrLfq_eZh86CUKa9_C4CFQ_uOLzMO5l3ehEeOeswVT//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="710" data-original-width="1314" height="346" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUSjq8P8EGGAOezOnx-LExGc6bF5N29VqvXjLOd1Kcuu6lRedChApZ1HwZ_H8LTW62W8v7GQUcvTowm4I4a5NvF-ogtKLsDISj8ARrLfq_eZh86CUKa9_C4CFQ_uOLzMO5l3ehEeOeswVT/w640-h346/016.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Diffuse/Ambient GI, reading a volumetric cube, which is not easy to decode...</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><span style="font-family: arial;"><br /></span><span style="font-family: arial;">Then, a indirect diffuse/ambient GI. Binds the g-buffer and a bunch of 64x64x64 volume textures that are hard to decode. From the inputs and outputs one can guess the volume is centered around the camera and contains indices to some sort of computed irradiance, maybe spherical harmonics or such. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;">The lighting is very soft/low-frequency and indirect shadows are not really visible in this pass. 
This might even by dynamic GI!</span></p><p style="text-align: justify;"><span style="font-family: arial;">Certainly is volumetric, which has the advantage of being "uniform" across all objects, moving or not, and this coherence shows in the final game.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmeC0YversKNiO5FEbB_CiH9VK9WZenWOZIJfO7Cw54UEvHaKA3sn1UUD6-NwX3MsSlRATUgcq-z0KDuluKx6-ARwfXic97n8KEymYL-FaArqnvhc-XXKuIxR3xv7_VA0J5DMxXUCcSwNz/s1708/017.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="482" data-original-width="1708" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmeC0YversKNiO5FEbB_CiH9VK9WZenWOZIJfO7Cw54UEvHaKA3sn1UUD6-NwX3MsSlRATUgcq-z0KDuluKx6-ARwfXic97n8KEymYL-FaArqnvhc-XXKuIxR3xv7_VA0J5DMxXUCcSwNz/w640-h180/017.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Final lighting composite, diffuse plus specular, and specular-only.</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="font-family: arial;">And finally, everything gets composited together: specular probes, SSR, SSAO, diffuse GI, analytic lighting. This pass emits again two buffers, one which seems to be final lighting, and a second with what appears to be only the specular parts.</span><p></p><p style="text-align: justify;"><span style="font-family: arial;">And here is where we can see what I said at the beginning. Most lighting is not from analytic lights! We don't see the usual tricks of the trade, with a lot of "fill" lights added by artists (albeit the light design is definitely very careful), instead indirect lighting is what makes most of the scene. 
This indirect lighting is not as "precise" as engines that rely more heavily on GI bakes and complicated encodings, but it is very uniform and regains high-frequency effects via the two very high-quality screen-space passes, the AO and reflection ones.</span></p><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnDwGl-pMh9S-kwLdt9Lc6ehmbEbqoa2mak7Qud3LLeYgbgTmsRnvNX5dbypAFnqct-CcZOKiTI4VN4u_QYnTXoxouaSpO2ZMNL5tyn9cmbP_wrn8M4bpR4Vjj0DXaLSbZvEPJBFDddBli/s1519/Capture2.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="565" data-original-width="1519" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnDwGl-pMh9S-kwLdt9Lc6ehmbEbqoa2mak7Qud3LLeYgbgTmsRnvNX5dbypAFnqct-CcZOKiTI4VN4u_QYnTXoxouaSpO2ZMNL5tyn9cmbP_wrn8M4bpR4Vjj0DXaLSbZvEPJBFDddBli/w640-h238/Capture2.PNG" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg3uRGbyrHEHfXENo8twyRc8GXnNS-zOuGrnwfloTISac0chkz6q8TsN24XRZfLRPk3AFR7PqFKd9DdNa3nv2zRtOCrAjecjP7_ac6CFnTgHuJAOfaOuNNIK87QqkgQXLo_CKlGHLeYDqK/s1522/Capture3.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="560" data-original-width="1522" height="236" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg3uRGbyrHEHfXENo8twyRc8GXnNS-zOuGrnwfloTISac0chkz6q8TsN24XRZfLRPk3AFR7PqFKd9DdNa3nv2zRtOCrAjecjP7_ac6CFnTgHuJAOfaOuNNIK87QqkgQXLo_CKlGHLeYDqK/w640-h236/Capture3.PNG" width="640" /></a></div></span><p style="text-align: justify;"><span style="font-family: arial;">The screen-space passes are quite noisy, which in turn makes temporal reprojection really fundamental, and this is another extremely interesting aspect of this engine. Traditional wisdom says that reprojection does not work in games that have lots of transparent surfaces. The sci-fi worlds of Cyberpunk definitely qualify for this, but the engineers here did not get the news and made things work anyway!</span></p><p style="text-align: justify;"><span style="font-family: arial;">And yes, sometimes it's possible to see reprojection artifact, and the entire shading can have a bit of "swimming" in motion, but in general, it's solid and coherent, qualities that even many engines using lightmaps cannot claim to have. Light leaks are not common, silhouettes are usually well shaded, properly occluded.</span></p><p style="text-align: justify;"><span style="font-family: arial;"><b>All the rest</b></span></p><p style="text-align: justify;"><span style="font-family: arial;">There are lots of other effects in the engine we won't cover - for brevity and to keep my sanity. Hair is very interesting, appearing to render multiple depth slices and inject itself partially in the g-buffer with some pre-lighting and weird normal (fake anisotropic?) effect. 
Translucency/skin shading is surely another important effect I won't dissect.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5NFfAdTJAWBNSuydlaGKoLeh2ETtU95XbvBASgkW1SJuW0e47Dk1mDH6YkExNGgaep4IW85VBi5jCWaUNtxc_w-IDP9X2SMMlnX4G_oQ_UlB-unPpMswK7asjxG3t6sselRVQBoOtn8lW/s1302/018.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="498" data-original-width="1302" height="244" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5NFfAdTJAWBNSuydlaGKoLeh2ETtU95XbvBASgkW1SJuW0e47Dk1mDH6YkExNGgaep4IW85VBi5jCWaUNtxc_w-IDP9X2SMMlnX4G_oQ_UlB-unPpMswK7asjxG3t6sselRVQBoOtn8lW/w640-h244/018.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Looks like charts caching lighting...</td></tr></tbody></table><p style="text-align: justify;"><span style="font-family: arial;">Before the frame is over though, we have to mention transparencies - as more magic is going on here for sure. First, there is a pass that seems to compute a light chart, I think for all transparencies, not just particles.</span></p><p style="text-align: justify;"><span style="font-family: arial;">Glass can blur whatever is behind them, and this is done with a specialized pass, first rendering transparent geometry in a buffer that accumulates the blur amount, then a series of compute shaders end up creating three mips of the screen, and finally everything is composited back in the scene.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEji0GDFRexVTH762l6xuK5sD7Xy06peMPCmXgf5Z-G5lrVlVl4yzFe8ey6komnmWHQ0YIbrgeGCKzJN7mE7n-rpxSSe5Ahf1tXM4BZ_gfQvGRHh4Q40msW9RPrNSfacx-kuRXgmRty40u7L/s1431/Capture4.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="619" data-original-width="1431" height="277" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEji0GDFRexVTH762l6xuK5sD7Xy06peMPCmXgf5Z-G5lrVlVl4yzFe8ey6komnmWHQ0YIbrgeGCKzJN7mE7n-rpxSSe5Ahf1tXM4BZ_gfQvGRHh4Q40msW9RPrNSfacx-kuRXgmRty40u7L/w640-h277/Capture4.PNG" width="640" /></a></div><br /><span style="font-family: arial;">After the "glass blur", transparencies are rendered again, together with particles, using the lighting information computed in the chart. 
At least at my rendering settings, everything here is done at full resolution.</span><p></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcIZBSG0d4yG95ZKHzab_2aw5Ynh9b9TmJsfyDdf-heuLM7VMqWgdSygILYAIdHiH6ZPF-q1Raxe0v1XgciD28bk3PS2BvlUcU_4vG-V7WPFaOMTPaRtxDHSAp_1kB3LAKYu1VHNSVBJwe/s1052/Capture5.PNG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="640" data-original-width="1052" height="390" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcIZBSG0d4yG95ZKHzab_2aw5Ynh9b9TmJsfyDdf-heuLM7VMqWgdSygILYAIdHiH6ZPF-q1Raxe0v1XgciD28bk3PS2BvlUcU_4vG-V7WPFaOMTPaRtxDHSAp_1kB3LAKYu1VHNSVBJwe/w640-h390/Capture5.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Scene after glass blur (in the inset) and with the actual glass rendered on top (big image)</td></tr></tbody></table><span style="font-family: arial;"><br /></span><span style="font-family: arial;">Finally, the all-mighty temporal reprojection. I would really like to see the game without this, the difference before and after the temporal reprojection is quite amazing. There is some sort of dilated mask magic going on, but to be honest, I can't see anything too bizarre going on, it's astonishing how well it works. </span><p></p><p style="text-align: justify;"><span style="font-family: arial;">Perhaps there are some very complicated secret recipes lurking somewhere in the shaders or beyond my ability to understand the capture.</span></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiels48M6iO5V0Sh9OcUPIkV8XP9TL1qOoyrIuWgG0WtjjRqA7yqrCD4JzCggEu190jss1PaYIEusb9619X28qCE4HS0z8WzlP5wbIh2c9KiRcHSgwGP5Es31FC6zhTYh-6Vum2FMNjO116/s1499/Capture6.PNG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="625" data-original-width="1499" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiels48M6iO5V0Sh9OcUPIkV8XP9TL1qOoyrIuWgG0WtjjRqA7yqrCD4JzCggEu190jss1PaYIEusb9619X28qCE4HS0z8WzlP5wbIh2c9KiRcHSgwGP5Es31FC6zhTYh-6Vum2FMNjO116/w640-h266/Capture6.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">On the left, current and previous frame, on the right, final image after temporal reprojection.</td></tr></tbody></table><p></p><p style="text-align: justify;"><span style="font-family: arial;"></span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitvH9OPWKGcgVY6mVwEgCFgzPmlQDe-0oHTDM81Q9Z6Gi7A9aPUN5oQHRXnQdhWOHLfRCx_hl7GLqm6pjsNMP_CzEF-7ERukBzlcAIInMHeepa4Ho-ZiwwiOYojbZEtefiZNUWnQnSNXPh//" style="margin-left: auto; margin-right: auto;"><img alt="" data-original-height="526" data-original-width="932" height="226" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitvH9OPWKGcgVY6mVwEgCFgzPmlQDe-0oHTDM81Q9Z6Gi7A9aPUN5oQHRXnQdhWOHLfRCx_hl7GLqm6pjsNMP_CzEF-7ERukBzlcAIInMHeepa4Ho-ZiwwiOYojbZEtefiZNUWnQnSNXPh/w400-h226/Capture7.PNG" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">This is from a different frame, a mask that is used for the TAA pass later on...</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><span style="font-family: arial;"></span></div><span style="font-family: arial;"><br />I wrote "finally" because I won't look further, i.e. the details of the post-effect stack, things here are not too surprising. Bloom is a big part of it, of course, almost adding another layer of indirect lighting, and it's top-notch as expected, stable, and wide. <br /></span><p></p><p style="text-align: justify;"><span style="font-family: arial;">Depth of field, of course, tone-mapping and auto-exposure... There are of course all the image-degradation fixings you'd expect and probably want to disable: film grain, lens flares, motion blur, chromatic aberration... Even the UI compositing is non-trivial, all done in compute, but who has the time... Now that I got all this off my chest, I can finally try to go and enjoy the game! Bye!</span></p></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com13tag:blogger.com,1999:blog-6950833531562942289.post-29480692199745191102020-12-01T17:22:00.003-08:002020-12-01T17:23:50.662-08:00Digital Dragons 2020<span style="font-family: arial;">Slides from my two presentations at this year's Digital Dragons. Enjoy!<br /><br /><a href="https://www.dropbox.com/s/0cn9t10xy8f69yz/DD-OpenProblems.pdf?dl=0">Open Problems in Real-Time Rendering</a></span><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">A.k.a. Are we solving the wrong problems today? </span><span style="font-family: arial;">We have been looking a lot at certain bits and pieces of math, but are we looking at the -right- pieces? </span><span style="font-family: arial;">A reminder that math and physics don't matter, if they don't solve actual problems for people - artists, or players.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4l_3EHeP2VlbrJZArdz0NUqby7V_cTrvb7JazUKQhawWbgMJFE7ksqg-AYE7oJjLeCTi1B61sykwWqqbfjdwlV3iwhES6JWC02hnyjMBqVXUco2_5OKgXCoglIO2rvwmWvXd31sNcwQM4//" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="704" data-original-width="1254" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4l_3EHeP2VlbrJZArdz0NUqby7V_cTrvb7JazUKQhawWbgMJFE7ksqg-AYE7oJjLeCTi1B61sykwWqqbfjdwlV3iwhES6JWC02hnyjMBqVXUco2_5OKgXCoglIO2rvwmWvXd31sNcwQM4/w400-h225/image.png" width="400" /></a></div></span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><i>Note: Naty Hoffman in his presentation at <a href="https://blog.selfshadow.com/publications/s2020-shading-course/">Siggraph 2020 physically based shading course</a> makes some similar (better!) remarks while he shows how we don't know Fresnel very well... 
Must read!</i></span></div><div><span style="font-family: arial;"><br /></span><div><span style="font-family: arial;"><a href="https://www.dropbox.com/s/9z7p6kg3utjzf7f/dd-metaverse.pdf?dl=0">Rendering the Metaverse across Space and Time.</a></span></div></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;">An introduction to Roblox and how its design and mission shapes the work we do to render shared worlds on almost any device, any graphics API - and how we Roblox has been achieving that for more than a decade now.</span></div><div><span style="font-family: arial;"><br /></span></div><div><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL1CiQIgtpMk-12EgEQTd9U5DDT2WycT-4d6QTmzleZAPpvpF_9ZIg-_GnzRYy7JH363bD6b_Q9RywdkFlYQDUY04d-_yk4QKGyLqyKEWXh00SmBJHZsnwMPOhKeckVEam2Tu7Oajr8aps//" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="600" data-original-width="1094" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL1CiQIgtpMk-12EgEQTd9U5DDT2WycT-4d6QTmzleZAPpvpF_9ZIg-_GnzRYy7JH363bD6b_Q9RywdkFlYQDUY04d-_yk4QKGyLqyKEWXh00SmBJHZsnwMPOhKeckVEam2Tu7Oajr8aps/w400-h220/image.png" width="400" /></a></div><br /><br /><br /></span></div>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com1tag:blogger.com,1999:blog-6950833531562942289.post-54575006238599774392020-11-25T14:26:00.007-08:002020-12-01T17:05:37.375-08:00Baking a Realistic Renderer from Scratch and other resources for Beginners in Computer Graphics<p><span style="font-family: arial;">Dump of a few things I got that can be useful for beginners in 3D Computer Graphics programming.</span></p><p></p><ul style="text-align: left;"><li><span style="font-family: arial;">Download <a href="https://www.dropbox.com/s/e1zmz4n85xngaxw/3D%20Computer%20Graphics%20Resources%20for%20Beginners.pdf?dl=0">a snapshot of my "3D Computer Graphics for Beginners"</a> curated collection of projects and resources. I know all the cool kids do this in GitHub and would call it "awesome something" - but I'm lazy and a contrarian so what you get is am ugly PDF made from a google docs page :)</span></li></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqyhEPN5TrIFRmVjkk2AFBYxyU6xCsfmtGwoPeZUVfjYgPs9lT4KguquntvUe-2en4h52N7oAmTnQo01Ily57_J_TymUUuYgxvl_GAUgu-E5fQ0TcKeBM_D8blOItEwIuKblcLNNva3fnv//" style="font-family: arial; margin-left: 1em; margin-right: 1em; text-align: center;"><img alt="" data-original-height="512" data-original-width="1188" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqyhEPN5TrIFRmVjkk2AFBYxyU6xCsfmtGwoPeZUVfjYgPs9lT4KguquntvUe-2en4h52N7oAmTnQo01Ily57_J_TymUUuYgxvl_GAUgu-E5fQ0TcKeBM_D8blOItEwIuKblcLNNva3fnv/w400-h173/image.png" width="400" /></a></div><div></div><ul style="text-align: left;"><li><span style="font-family: arial;">Past two weeks I was invited by my master's thesis professor, Andrea Abate, to give a (virtual) seminar at my alma mater, the University of Salerno. You can grab the materials I've made for this here:</span></li><ul><li><span style="font-family: arial;">Part 1: <a href="https://www.dropbox.com/s/9nncapjrmp5lb1k/UNISA_1_GPU.pdf?dl=0">Where do GPUs come from</a>. 
This is a rehash of a talk that I've been doing for a while now, I've posted at least another time about this <a href="http://c0de517e.blogspot.com/2017/05/where-do-gpus-come-from.html">here</a>.</span></li><li><span style="font-family: arial;">Part 2: <a href="https://www.dropbox.com/s/dzya6p9pj92tq4y/UNISA_2_RENDERING.pdf?dl=0">Baking a Realistic Renderer from scratch</a>. This one is novel - and gives the title to this post. In the slides, we go from zero to a path tracer in about two hours, doing the necessary math step by step and (live) coding the renderer in C, via <a href="https://github.com/anael-seghezzi/CToy">CToy</a>. <a href="https://www.dropbox.com/s/19gkv65ly3z6gzy/UNISA_RT_CTOY.zip?dl=0">Here is the CToy build I used together with the sourcecode for all the steps</a>.</span></li></ul></ul><div><span style="font-family: arial;"><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiHBW7t3FkrWmKsa6NH35_L2OlOEWTeuG4RDLHlEbNZoIJAtDITTNL-uefLMi4Gyxjrn_OaQqErlT9DdrLqCja9MS5ek1BWb52r-AYXbNi3vbG3JK0ktA_3L3V9MMok20jrRRNYNy8Qj6W//" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="1240" data-original-width="1588" height="313" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiHBW7t3FkrWmKsa6NH35_L2OlOEWTeuG4RDLHlEbNZoIJAtDITTNL-uefLMi4Gyxjrn_OaQqErlT9DdrLqCja9MS5ek1BWb52r-AYXbNi3vbG3JK0ktA_3L3V9MMok20jrRRNYNy8Qj6W/w400-h313/image.png" width="400" /></a></div></div><div><span style="font-family: arial;"><br /></span></div>If you're in the States, maybe you can fine here something to tinker with during this self-isolated thanksgiving. Enjoy your holidays!<br /><br /></span></div><p></p>DEADC0DEhttp://www.blogger.com/profile/01477408942876127202noreply@blogger.com0tag:blogger.com,1999:blog-6950833531562942289.post-17993771451453674092020-06-05T14:43:00.003-07:002020-06-06T22:41:47.312-07:00OT: To my white friends. A few words on #BLM.<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Hi all.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">We all see what's going on in the US these days, so I'll cut to the chase. This isn't going to be a lecture, do not worry, nor I'm turning this blog into a political platform.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">But, I realized over the years that I've not been always part of the solution for our ongoing social issues, and I hope some of the following can help others.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<em style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">This is not about where to donate or what to do or not to do to help. </em></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">This is about how to help yourself. It is our responsibility to solve our own conflicts, and only through that, we can be good to others.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<strong style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Forgive yourself. Then move forward.</strong></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">This is not easy. There is a reason why we minimize social issues, why our brains are always trying to see a justification. Yes, it sucks, but... But he shouldn't have done this. But let's see how it goes. We have to hear from both sides. But, but but...</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">You're fighting against yourself. We are all privileged in some way, we truly are.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">To be open to that means shaking your foundation. We've been given something that we didn't do anything to deserve. But we do deserve things! We are smart, we worked hard, we have our issues, we have our self-worth. We matter!</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Saying the word "privilege" is an attack, no matter if it registers consciously or not, it erodes some of our self-worth. Our brains, our very own humanity, is not built to accept it.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Have you ever looked at a person on the street and felt an idea forming in your mind, that perhaps they did something to deserve it. Perhaps their situation is a result of their actions?</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">That's the mechanism in action. We cannot accept that we are not the masters of our lives, that we don't have control. It's scary and uncomfortable, and we are built to resist it. What meaning does life has if the biggest differences between people are left to the toss of a coin?</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">This is not about justifying our actions or inactions. It's not about moral judgement at all. It's about understanding how ego and feeling work, understand that they are part of humanity and for a good reason.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">But also, once you truly understand them, you can move past them, not by suppressing them, but by knowing they exist for a reason, and by not letting them dominate your choices.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">This is not about judgment, and it's not about being nice either. It's a continuous self-reflection that helps us live consciously. It's not easy, it's definitely uncomfortable, and there is no victory to be had.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">It is what some might call a practice, or a philosophy. You cannot win or lose, fail or succeed, because judgment is not part of any of this. It's about being conscious of our own humanity.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<strong style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">All lives matter?</strong></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">That said, I want to address a couple of issues that I think stem from our egos; maybe you will agree.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">The first is the phrase "all lives matter", which is always uttered by us, white people. Now, given the above, you might know where I'm going.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Why do people say this? Are they racist? What does it mean to be racist anyway? Is being fair racist now? Let's try to untangle these things.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">First, you have to realize that this is an emotional response that your brain masks with rationalization. Wait a second, hear me out, don't get defensive now. I think you can see how the phrase is, at face value, quite silly.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">When an acquaintance faces a loss, we say, "I'm sorry for your loss." We do not say, "I'm sorry for all the people in the world who lost someone." Imagine being on the receiving end of a similar rebuttal. You feel strongly about something, and someone feels the need to interject and shift the attention. Do you see the problem?</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">And it's a rationalization. It's obvious that "black lives matter" does not subtract from the worthiness of others; it shines a light on a specific imbalance of power. A picture is worth a thousand words:</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhptXo_NeWNz72kXh-lPEUOI6PyviyGwtaLcQAppyCel7aK0r3zY5t68XN2UoLXZ3zlzLbv8o9yXHm0hzWDk9rvyAmOiWtDvL_u2Gfgov5s1oyvv045tufCYnvuuUAG7uVOhR_W4iwsCsOl/s1600/equity-300x225.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="225" data-original-width="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhptXo_NeWNz72kXh-lPEUOI6PyviyGwtaLcQAppyCel7aK0r3zY5t68XN2UoLXZ3zlzLbv8o9yXHm0hzWDk9rvyAmOiWtDvL_u2Gfgov5s1oyvv045tufCYnvuuUAG7uVOhR_W4iwsCsOl/s1600/equity-300x225.jpg" /></a></div>
<br />
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">The "all lives matter" response is not about good or bad people, or shades of racism. It's a natural reaction that we all have. The feeling is human, but it is not a justification. It is our own responsibility to learn how to deal with feelings, understand what they are telling us, and act after we have observed them, not driven by them.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<strong style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">There are bad people on both sides.</strong></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Now that we've got the easy one out of the way, this is going to be a bit more of a challenge.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">How do we feel about looting? Here we have a divide, the uncomfortable line. Should I say ACAB? Can I justify riots, destruction? What do they want to achieve? Don't they realize they're undermining the message, the mission? I stand for political change, for laws being passed, for the bad apples being removed; this doesn't help at all!</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">I understand. And you're right. But realize this is a matter of perspective. And hold on, because I'm not going to justify anyone here, just try to look at the problem from a different angle.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">We want change, actual action: we push for laws, make donations, perhaps even study the problem in papers and publications, work out how best to spend money, defund police and prioritize social programs, and so on...</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">All these thoughts I understand; I grew up in a middle-class, intellectual, left-wing, progressive family. I get fighting the good fight, being involved; I get socialism and welfare.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">What I didn't get, until recently, is how other people didn't see the same things. How is it that, even when we want the same objective, some people are looting and some people are reading?</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">In retrospect, the answer is obvious. It's because I believe in society. And why wouldn't I? It always worked great for me.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">The thing I cannot see is the perspective of someone who doesn't believe in it anymore. Someone who is not willing to fix the social contract, but is fine with tearing it up, because it never worked for them. I can imagine the scenarios. What if my life didn't matter much anymore? Would I be thinking of what's best for the world, or would I just watch it burn?</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Understand that you will never fully understand. It is not the responsibility of others to explain, either, nor does any of this justify actions. Actions will be judged from the perspective of history, by what in the end did or did not change society.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">I think that looting will probably not help, though its effects are complex, as anything in the real world is; but that's not the point.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">The point is that we cannot escape living our lives from a given perspective. It's not because we're good or bad; it is yet another limitation of being human.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">If we get that, then hopefully we can understand how we feel, understand how others feel, and have less need to justify or judge.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Lastly, when in doubt, take responsibility. You might think "but what about... X", or "yes, but... Y". And you're right. X and Y matter; it is true, it's not crazy, it's not a lie. But check whether X and Y are about others, other communities, other people's behaviors, farther from you.</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<span data-preserver-spaces="true" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">Then, understand that responsibility starts with us, radiates from us. Again, this does not justify or diminish X and Y. They are still true, and important. But, as a great philosopher once said, "I'm starting with the man in the mirror".</span></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<br /></div>
<div style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;">
<em style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; margin-bottom: 0pt; margin-top: 0pt;">As all my posts are, this is improvised, and I might revisit it in time. Stay safe. Love you all.</em></div>