24 March, 2019

GDC 2019 - Everyday (shallow) ML

Here are the slides for my talk in the GDC 2019 Machine Learning tutorial day. 
Lots of slides, many more than were shown on stage...



Plus! 

Code for my "nvgPrint", a nanoVG/OpenGL C++ library for super simple real-time, asynchronous plotting.

Grab it while quantities last!


11 March, 2019

Rendering doesn’t matter anymore?

Apologies. I wanted to resist the clickbait title, but I couldn’t find anything much better...

And no, I’m not renouncing my ways as a rendering engineer, I’m not going to work on build systems or anything like that. Nor do I believe that real-time rendering has “peaked” or that our pace of progress in image quality has slowed. There is still a ton of work to do, and the difference between good and bad graphics can be dramatic...

But what I want to talk about a bit more (I mentioned this in my previous post) is what matters, and how we decide that. ROI: perhaps an ugly term, but it gets the job done.

From product.

I’ve spent most of my now thirteen-year-old professional career in videogames working on production teams. A.k.a. making games. And I’ve helped make lots of games, averaging about a game per year even while in production, which is quite unusual, I guess.

Now, when you are in production, things are relatively simple. Ok, no, they are everything but. What I mean is that it is straightforward... Ok, maybe still not the best description.

You start with some sort of rough plan. Hopefully, the creative people have ideas, they present them to you, and you start making a sketch. What are the risks, what to experiment with first, which tasks are better understood.

Unless you are bootstrapping an engine from scratch or doing major tech changes, mostly you’ll be asked for a ton of features, things people want. An unreasonable number of them. Ludicrous.

So you go on and prioritize, estimate, shuffle things until you have some plan that makes sense. It won’t, but we know that; we start working, and as things change, we re-adjust that plan, kicking features off the list and moving things up the priority list...

So you get a gigantic amount of work to do, you get on the ride and off you go, fighting fires as they happen, course-adjusting and bracing yourself for the landing. For the most part. There are some other skills involved here and there, but mostly it’s about steering this huge ship that has both a ton of momentum and the worst controls ever.

Naturally, there isn’t much time to think about philosophical questions and other bullshit like that. In fact, plenty of times the truth is that you even start losing control over the priorities.

That neat idea of reshuffling your list becomes more like a rough sort, and you don’t even necessarily have the time or energy to understand why the people asking for things need them...


Production, on a good day.

If you go around and look at big enough productions, one pattern you will notice is that people start working without knowing the “why” of things. Which leads, needless to say, to quite sub-optimal solutions. But the production beast is an organic one, it’s unclean, it’s made of people and opinions and blood and sweat. Engineering is the art of handling all that and still shipping a great game, and it looks nothing like any idealized version of beauty some programmers might hold dear.

To technology.

Then you move to some cushy job in some central technology department, right? And now you have a problem. You have time, at least, sometimes.

You might want to work on things that help, or have a chance to help, more than a single product. If you do R&D, you will be doing things that have more risks and unknowns. In general, you aren’t so strongly tied to that list of features people are shuffling around day after day. Even when you are doing the only reasonable thing, which is to be attached to a product, you are not that close; you can’t be, as you’re not part of the core team.

This is an opportunity because you can have some time and freedom, but also a huge risk because, in the end, the product is all that matters. Being singularly focused on production is not necessarily the best strategy for great products, because that monster swallows and consumes everything, focused on getting “more”, but straying away too much is the road to masturbatory efforts that can be irrelevant at best, dangerous most often.

So, you start thinking of ROI. What should I do? What’s best? You probably have things from multiple teams that could be done, and you have other things that you can persuade teams they should want...

In my case, being a rendering person, the question boils down to: what matters in rendering? How do I estimate how much a thing weighs? When you move from “vfx artists want this particle trail thing and you have to do it tomorrow” to looking at things with an iota of horizon, how do you decide?

Rendering doesn’t matter...

...like it used to. Once upon a time, rendering made the games. Even more than that, entire genres. Doom, of course, is the obvious example, but there are many: the CD-ROM FMV game era, the platformers, shooters and so on fuelled by hardware sprites and scrolling backgrounds.


Chances are your engine won't create the next big videogame genre.

Then that ended, we arrived at a point where we had enough computing hardware that videogame genres are not defined by technology anymore. Perhaps this will change with VR/AR but for now let’s ignore them (they’re not hard to ignore either, these days).

But we still had a period where technology could be product-defining. Call of Duty running at 60fps on PS3 and 360, for example, was quite unique, and that technical characteristic was instrumental to the product. Today doing a 60fps title is the norm; to ship at 30 is almost a gutsy move...

Rendering is thus restricted to the narrower field of aesthetics. It’s just... graphics. Sad if you think of it, right?

Well of course not! We have an ace up our sleeves, see. It’s true that technology is not genre-defining anymore, but AAA productions are insanely graphics-intensive. We love our computer graphics, and the number of people dedicated to their care and feeding is enormous. Everything is good again in the universe, rendering engineering reigns supreme.

So this is the first order of attack of the ROI problem. There are lots of things that are measurable in people and hours and dollars. These, pretty much, will automatically win over anything else. Let’s put them in the bucket of “really important stuff”.

By the way, when I say “measurable”, I don’t mean you can measure them or that you will. You most definitely will not! What I mean is that you could think of them and have a strong feeling they relate to said measurable quantities...

Chasing shiny things.

So I said you can bucket things. Things that are required to ship the game first. Things that help people second. Third, you get all your shiny things, which are, incidentally, what today you could call graphics R&D. A good part of the stuff I do!

Should we stop doing that? No, of course I will never admit to that, c'mon.

But more seriously, it obviously can’t be that simple. There will never be an end to things that “help people”; even in the best possible scenario you can still make progress, nothing is ever perfect. So obviously you will reach a point where some rendering effect trumps a tiny pipeline improvement, that much is a given!

Moreover, it is not that computer-graphics techniques, even when they are purely visual, do not help content production. We could point at the obvious trend of physically-based rendering, and how it helped (after a lot of growing pains everyone had to go through) to curb the explosion of hacks and ad-hoc controls that we previously needed to create assets.

But even smaller things can give artists more freedom. Even things like antialiasing, for example, might mean that geometry and other sources of discontinuity can be used more leniently, without transforming the frame into an undecipherable mess.

Not only are there diminishing returns for productivity improvements, as for anything else, but the split point between features and productivity is often tricky. We definitely do not wait until everything is perfect before pushing more features out; the production monster wants to be fed.

And we shall never, ever discount the gigantic effects of familiarity, the other big scary monster. It is not worth sacrificing everything to it, but we should respect it. To use a technique well, to master it takes a long time. Changing things, even if entirely for the better, with no drawbacks whatsoever, still implies that we need to pay the (often huge) costs of loss of familiarity.

So? How do you decide? How do you measure? Then again. You do not. 

I hope he won’t mind me saying that this is one of the paths to enlightenment forced on me by Christer, my former boss. How to put this. He has his tricks, not quite koans... So I learned that when he wasn’t persuaded about the opportunity of something, he would go and ask me to put things in more systematic ways, to try to narrow down that ever-elusive “ROI”.

Then one time, I think, we were even arguing about how he could decide whether a given initiative he was supporting would, in the end, be beneficial, or the better course compared to another alternative. And he slipped and said that we don’t necessarily have to quantify this ROI thing! Of course, we both immediately caught that; even though we were on the phone he could almost sense my smile. But being the clever man he is, he managed to still be right despite the apparent contradiction...

The lesson is that we want to keep in mind that ROI thing. Not that we need to necessarily optimize for it and spend too much time chasing it. But we definitely need to keep it in mind, be always scared of the risk of doing irrelevant, or worse, damaging things. Keep ourselves accountable.

It’s the question, not the answer.

You might be excused for thinking that I put the question mark in the title, even if it isn’t in the form of a question, because of my poor English. But no, it was a clever thing, you see. I actually went back halfway into writing this, thought about it, and finally changed the punctuation. Only after deciding I would also write this, and feel so meta-clever. And again, and... ok, let’s stop this recursive loop...

And if I was really good at this, I could have jumped directly to the point and spared you all the blabbing, but I have time on my hands these days so. You’re welcome.

In the end, it is true that certain games should even chase diminishing returns, because that’s what you do when you’re far enough ahead. And it’s totally true that you can’t really quantify ROI anyway, so oftentimes you should just do what you want. If someone really thinks something is important, and it’s not offensively bad, there should be space for that. In other words, because we know we are bad at ROI, we should realize that to chase it we should not chase it all the time (surprisingly, this is even a concept in optimization algorithms, by the way: exploration versus exploitation).

But! The questions are interesting.

How important are shiny things? Is there a point where state-of-the-art techniques become so complex that they are unfriendly to content creators or to the programmers integrating and iterating on them, so much so that they will be used sub-optimally? And where simpler solutions would actually have been better?

Think for example of something perfectly physically accurate, that can produce perfect images, but that behaves poorly when the inputs are not exact. This is not even such a wild scenario; you can see plenty of PBR games that would most likely have been better off without copy-and-pasting the GGX formulas, because they now go nuclear with specular aliasing...
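To make the example concrete, this is the standard GGX normal distribution function that such shaders copy (the formula itself is textbook material, not tied to any particular game):

\[
D_{\mathrm{GGX}}(h) = \frac{\alpha^2}{\pi \left( (n \cdot h)^2 (\alpha^2 - 1) + 1 \right)^2}
\]

At the highlight peak, where \(n \cdot h = 1\), this evaluates to \(1 / (\pi \alpha^2)\), which explodes as the roughness \(\alpha\) goes to zero. Feed it slightly-off normals and low roughness values and you get near-singular spikes of specular energy that fall between pixel samples and shimmer: that is the “going nuclear” part.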


Bloodborne might not be the pinnacle of RTR, but it is imaginative...

Even more interesting: is there a point where the attention to graphical perfection actually produces worse graphics? Could it be, for example, that the effort required to create worlds that are perfect, truly great quality-wise, gets in the way of creating worlds that have the variety, the artistry, the iteration and the look that, in the end, are most often correlated with what people think of as great graphics?

Again. In the end, we should remember that we serve the product. Not photorealism per se, but the product. We do believe that photorealism is a great tool to create games, and I won’t question that. But still, we have to remember that photorealism is not the goal; technology per se is useless. It’s the product that we work for.

And if I had to guess, I'd say that in most products today end-user image quality and, in most cases, performance are bottlenecked by asset production, not by the lack of whatever latest cool rendering trick. In particular by:
  • The sheer ability to author assets. Quantity / Variety.
  • The ability to iterate on assets. Quality.
  • The complexity of technical issues linked to art assets. Which in practice yields sub-optimal decisions. Performance & Quality.
  • And the very fixed granularity of assets and their editing tools, the overall inability to perform large, sweeping art changes. The more an environment is "dressed" (authored), the more it hardens and resists change. Art direction. (And perhaps this also causes an over-reliance on the few tools that can do said sweeping changes, namely post-effects.)
N.B. All these are rendering problems! Implementation, research, even hardware innovation. Despite the title, the argument here is not that rendering research in videogames is a waste of time, or beyond diminishing returns. Au contraire! It's more vital than ever, in our times of enormous asset pressure. But we have to think hard about what is useful to the end product. 
To make a crude example: a very smart system to automatically generate rendering meshes from artist data (LODs, materials, instances, etc.) is probably orders of magnitude more important than, say, a post-effect...

01 March, 2019

What I learned at Activision.

Today is my last day at Activision.

Not quite the end of an era, but my six-year stint at Activision|Blizzard|King has been by far the longest I’ve worked for a single company so far, and I wanted to write something about it.

I don’t usually do things like this, but long gone are the times when I pretended this blog could stay anonymous. Also, I do think that we should really talk more about our experiences with teams and companies, be more open. I’ve never done that on this blog (although I’ve always been happy to chat about anything in person), so let me quickly fix that.



This has been my, to be honest quite lucky, video-game career trajectory:

- Milestone (Italy, racing games). The indie company. Here, we were a family. And like most families, often loud and dysfunctional, sometimes fighting, but in the end, for me, it was always fair and always fun. We were pioneers, not because we were necessarily doing state-of-the-art things, but because nobody around us knew better; we had to figure out everything on our own. Eventually, that became a limiting factor for my own growth, but it was great to start there.

- Electronic Arts (Canada, Fight Night team). My team at EAC was probably the best example of a well-organized game studio. Everything was neat, productions went smoothly, and we created some quite kick-ass graphics as well. EA, and probably even more so EA Sports, is truly a game developer. And by that I mean that it takes part in game development: we had shared guidelines, procedures, technologies, resources. Of course, each team still had plenty of degrees of freedom to custom-fit the EA way to their specific reality, but you always felt part of a bigger ecosystem and had access to a gigantic wealth of knowledge and people across the globe. Fun times!

- Relic (Space Marine, mostly). Relic had much more of the “indie” feel of my Milestone days. Not quite the same, as we were a big team in an even bigger studio, part of a publicly traded company, but we were also exploring uncharted territory (for us), with very smart people and lots of last-minute hacking. I’m proud of what we achieved, it was fun, and I loved the spirit we had in our rendering/optimization corner of the office. We did perhaps bite off a bit more than we could chew though, doing something unprecedented for the studio. THQ was also failing fast, which didn’t help.

- Capcom (Vancouver). This studio is now closed. It was an unlucky choice for me, our project was riddled with all kinds of problems, in all possible dimensions. Eventually, I was laid off, then the project was canceled, and a few years later, the studio went down. It was a very stressful time, day after day I was quite unhappy. Still, I have to say I met some excellent people and I’m glad that I now know them!

- And now, Activision.

It’s the people.

One of the company mottos is “it’s the people”. I didn’t think that these “values”, which all companies put forward, were particularly taken to heart in Activision’s case; at least when it came to Central Technology, I always felt we didn’t pay much attention to them. But for me, it was the people, first and foremost.

Activision is the place where I found the smartest people around me, by far. And I’ve worked with very smart people, in great companies, but nothing touches this.

Now, I have to say, I also have a unique, very biased vantage point. Being a technical director in central technology means interfacing mostly with the most senior technical people and the studio leadership. I was not in production and not working with a single studio. A different ballgame.

But still, I can’t even make a list here! Ok, ACME: Michael, Wade, Paul, Aki. I don’t think I can in a few words express the brilliance of these individuals. Paul’s technical abilities are unbeatable. He knows everything and can do anything. Wade is probably worse, and any time I find the tiniest flaw in Michael’s life I breathe a sigh of relief (my girlfriend says he’s my work husband, even if I do have several man crushes, to be honest; yes, there is a list). Aki started as an intern and was recently hired full time. I think he is already a better programmer than I am, and I definitely do not suffer from impostor syndrome.

Peter-Pike Sloan’s research team is the best R&D team I’ve ever seen, with people like Michal Iwanicki, Josiah Manson, Ari... But then again, I literally can’t make a list, and I’m talking only about the rendering people; no, actually, only the rendering people I know best! My home team at Radical, which was a great studio in its own right, has been fantastic: Josh, Ryan, Tom, Peter, Andrew. CTN and that shadertoy genius of Paul Malin. And then the game teams. I mean, you can’t beat Drobot, can you? Jorge Jimenez! Dimitar Lazarov, Danny Chan: these are people who, every project, decide to completely change how Call of Duty renders things. Because why not, right? And of course, the people above me: Christer, whom I admired way before he landed at Activision, but also Dave Cowling, who hired me, and Andy, our CTO, who came to speak to us a year before he got hired; even back then I thought he was an incredibly smart guy. And that's to speak just of the people in the Call of Duty orbit...

Truth is, if you know, you already know. If you don’t, it’s hard for me to tell you, so let’s just cut it short. Amazing people.

Not just smart.

Ok, so we have smart people, blah blah. What gives, are you just showing off or is there a point to this? There is actually.

And the lesson is not even “how to hire smart people” or that you should hire smart people. To be honest, I don’t think we have a way; even if Christer (and I, even) did put a lot of time and effort into thinking about the hiring process, I think you can only affect some multipliers. Mostly, teams seem to me to build up from gravitational pull, around a given company culture. Once you have a certain number of people who value certain things, they tend to grab other people who share the same core values, even if you want to have different points of view.

No, what Activision really taught me is that smart people don’t really matter, per se. Brilliance alone doesn’t mean that good things will happen to your products, actually, it can be dangerous because it can correlate with ego, especially if you haven’t reached the highest level of enlightenment yet.

What’s interesting about this particular bunch of smart people is that they are also what actors call “grounded”. There is little bullshit going around. Tech is not made for tech’s sake. We don’t even have a named engine, in a time when, even if you really have just one game, one codebase, you would give silly codenames to each library and each little bit of tech, and maybe put some big splash screens before your main titles. Made with... xyz.

This is actually a lesson that originates from the early days of Call of Duty at IW. From what I’m told, IW never named their engine, because any time you name something it becomes a bit more of a thing, and you start working for it. And they were a studio making games, the game is the thing. Nothing else.

This meshes just so well with the Activision way. So many people taught me so many lessons here, but the common thread I found is this attention to what matters, when, especially for us rendering people, it is so annoyingly easy to get distracted, quite literally, by whatever cool shiny thing is in the hype of the moment.

You might even call it ROI; I sometimes do, even if perhaps I should not, it sounds “bad”. But you have to be aware of what matters, what the trade-offs are, what you should spend your time on. Which doesn’t mean you are a drone doing complex math in Excel, quite the contrary; there is definitely space for doing things because you want to, and you like to.

Thing is, we cannot really compute the ROI of our tech stuff, especially for something as far removed from product sales as rendering code is. But we can be aware of these things. Even just thinking about them a bit makes a big difference.

Peter-Pike could be another example. I said I’ve never seen a better R&D team. Does it mean that there aren’t other teams that do research as good, or maybe even “better” sometimes? Of course there are! The difference, though, is impact. Not only does almost all our R&D work ship on the given year’s title, it’s also focused on what matters: either helping ship the title, technology that is needed to do what the teams want to do, or helping the teams work better, technology that helps productivity. Often both.

And this then again reflects in the people. Yes, Peter-Pike is an accomplished academic, and his team does real research, meaning things we don’t know how, or even if, we will solve. But he’s also incredibly pragmatic. He is hands-on, coding all the time. More than I am. Better than I am!

Activision and Electronic Arts...

...couldn’t be more different, contrary to the popular belief that lumps all three or four big publishers into the same AAA bucket.

And that is also what was interesting and life-changing. You see, I truly love them both. They are great in their own right and the results speak for themselves. They both have great people, I don’t even need to tell you that.

But they work so differently.

EA as I said, feels like a big game developer, a community. Sharing is one of the key values, finding the best ways of doing things, leveraging their size. It sounds very reasonable, and it is.

But Activision is almost the opposite. It feels like a publisher that owns a number of internal studios. The studios, of course, are accountable to the publisher, but they are independent; that is a key, core value. Central technology is not there to tell people how to do things, but is a publisher-side resource to help if possible. The teams are incredibly strong on their own, even in terms of R&D. And, at least from my vantage point, they get to call all their own shots. Which is again very reasonable: if you tell people they are accountable, they should definitely have freedom of choice too.

And yet again. Valuing independence versus valuing a shared community: the opposite viewpoints end up in practice creating results that are not that dissimilar. How come?

If you’ve been doing this for a while, it won’t even be surprising. The key is that we don’t really know what’s best. Companies and teams, even technologies and products, organize around given values, a cultural environment that was probably created when there were just a handful of employees. And once you have these, the truth is you can most often structure everything else around them in a way that makes sense, that works.

We see this so often in code too. A handful of key decisions are made because of legacy or opinions. We’re going to be a deferred renderer. Well, now certain things are harder, others are easier. Certain stuff needs to happen early in the frame, other stuff late; you have certain bottlenecks and idle times, and you work from there, find smart ways to put work where the GPU is free, and shift techniques to remove bottlenecks, and so on.

Which doesn't mean everything is ever perfect, mind you. We always have pain points and room for improvement, and different strategies yield different issues. To a degree, this is even lucky! I don't make games, I help people and technology. The day Unity solves all the technical and organizational problems of game making is the day I'll be out of a job. At least in this industry...

Ok, then why?

If you love all these companies, why leave? What are you not telling us...
Well. I’m not too bright. And instead of keeping my great job, sometimes I venture into the unknown. But that’s a tale for another day...



26 February, 2019

C++, it’s not you. It’s me.

How I learned not to worry and tolerate C++

If you follow the twitter-verse (ok, and you happen to be in the same small circle of grumpy gamedevs that forms my bubble) you might have noticed lately a rise of rage and sarcasm against C++ and the direction it's taking.

I don't want to post all the relevant bits, but the crux of the issue, for the lucky among you who don't do social media, is the growing disconnect between the people working on the big, complex, performance-sensitive and often monolithic, legacy-ridden codebases that we find in game development, and the ideas of “modernity” of the C++ standards community.

Our use-case is perhaps peculiar. We maintain large codebases, with large teams, but we never did great at modularization. This is our fault, and I’m not sure why it happened. Maybe it’s a combination of factors, including certain platforms and compilers not working well with dynamic libraries, or performance concerns. But most likely, it’s also the product of a creative environment where experimentation is a necessity, planning is inherently hard and architecture work is often simply neglected due to other production pressures.

The (AAA) gamedev use-case

Whatever the historical reasons, the reality is that we live in an environment where, more often than not:
  • We don’t care about the STL, nor its performance improvements. We developed our own bespoke containers, both because we need very ad-hoc algorithms tuned to specific problem sizes, and because of design issues of the STL itself which made it impossible to use (e.g. its historical reluctance to play well with things like memory alignment; see the sketch right after this list).
  • We do care about all declinations of “performance”. Not only the final, all-optimizations-enabled code generation, but also performance in debug builds, and compiler performance. Iteration times.
  • We care about being able to reason about code. Simplicity, not counted as lines of code, but as the ability to clearly understand what a line of code does. We would often prefer verbose, even somewhat arcane-looking code to unpredictable code, where understanding the relation between what we wrote and what happens requires a lot of context, of “global” information.
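As a minimal illustration of that first point: before C++17 (which finally brought aligned operator new and std::aligned_alloc), the standard library offered no portable way to request, say, SIMD-friendly alignment, so every studio wrote helpers along these lines. A sketch with my own names, not any particular codebase:

```cpp
#include <cstddef>
#include <new>

#if defined(_MSC_VER)
#include <malloc.h>   // _aligned_malloc / _aligned_free (MSVC CRT extension)
#else
#include <stdlib.h>   // posix_memalign (POSIX)
#endif

// Hypothetical helpers in the style of bespoke gamedev allocators;
// custom containers would be built on top of these.
void* allocAligned(std::size_t size, std::size_t alignment)
{
    void* p = nullptr;
#if defined(_MSC_VER)
    p = _aligned_malloc(size, alignment);
#else
    // alignment must be a power of two and a multiple of sizeof(void*)
    if (posix_memalign(&p, alignment, size) != 0)
        p = nullptr;
#endif
    if (!p)
        throw std::bad_alloc();
    return p;
}

void freeAligned(void* p)
{
#if defined(_MSC_VER)
    _aligned_free(p);
#else
    free(p);
#endif
}
```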

Given this scenario, it should be easy to see how most of the C++ additions since C++11 have gone in the “wrong” direction. They typically do not add any expressive power for our use-cases, but instead bring lots of complexity to an already almost impossible-to-master language. Complexity that also “trickles” down to tooling: compilers, compile times, debuggers and so on.

It’s hard to overstate how wide this chasm is growing, with some directions being truly infuriating, but let me bring a concrete example. Take the modernization of STL containers: r-value references, initializer lists and all the ecosystem around that. Clearly a huge feature for C++, allowing even significant savings for certain uses of the STL. And what you would call “a zero-cost abstraction”: if you don’t use it, you don’t see it. Everyone’s happy, right?

Quite the contrary! It is the prime example of something that is entirely useless for people who already know about the cost of constructors, temporaries and the like, who designed their code thinking of how to lay out and transform bits, instead of higher-level concepts sketched in a UML diagram. Useless but dangerous, as these concepts increased the complexity of the language exponentially, to the point that very few can really claim to understand all their nuances. And yet they are still hard to entirely avoid in projects made by lots of people, where you do not necessarily even have control of all the code, due to external libraries.
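A tiny toy example (mine, not from any particular codebase) of the kind of nuance I mean: code that looks like it moves, and silently doesn't.

```cpp
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> makeNames()
{
    std::vector<std::string> names;

    const std::string greeting = "hello";
    // Looks like a move, but 'greeting' is const: overload resolution
    // quietly falls back to the copy constructor. No warning, no error.
    names.push_back(std::move(greeting));

    std::string temp = "world";
    // This one does move, leaving 'temp' in a valid-but-unspecified
    // state; reading it afterwards is still legal, so nothing flags it.
    names.push_back(std::move(temp));

    return names;
}
```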

And while all this humongous effort is spent on one side, features that would truly help performance-sensitive, large-scale “systems” programming are still completely ignored. C++ still doesn’t have a “restrict” keyword. Strict aliasing is incredibly troublesome, and even getting worse with certain proposals. Threads and memory alignment were implemented first (and arguably better) in good old C.
We still don’t have vector types, not to mention fancier features like the ability to “transpose” the memory layout of arrays of structures (into SoA or AoSoA layouts and so on). We don’t have anything to help tooling or compile times, like standardized reflection, serialization or proper modules. And we keep adding metaprogramming (template) features before tackling complexity and usability issues (e.g. concepts, now scheduled for C++20).
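For the curious, this is the kind of “transposition” I mean, spelled out by hand. A sketch with made-up names, showing the mechanical boilerplate a language-level feature could generate for us:

```cpp
#include <cstddef>
#include <vector>

// How gameplay code wants to *think* about the data (AoS)...
struct Particle
{
    float x, y, z;
    float age;
};

// ...versus how the hardware wants to *traverse* it (SoA): each field
// streams contiguously, cache- and SIMD-friendly.
struct ParticlesSoA
{
    std::vector<float> x, y, z;
    std::vector<float> age;

    void push(const Particle& p)
    {
        x.push_back(p.x);
        y.push_back(p.y);
        z.push_back(p.z);
        age.push_back(p.age);
    }
};

void ageAll(ParticlesSoA& ps, float dt)
{
    // Touches only the 'age' stream; an AoS layout would drag the
    // whole struct through the cache for every particle.
    for (std::size_t i = 0; i < ps.age.size(); ++i)
        ps.age[i] += dt;
}

// Every change to Particle must be mirrored by hand in ParticlesSoA:
// exactly the mechanical transformation the language could do for us.
```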

What’s C++ anyway?

I think part of the disconnect, and even anger in the community, is due to a misinterpretation of what C++ is and always has been.
There is this myth that C++ is a low-level, high-performance/system programming language. It’s false, and it has always been false.

Clearly, a language that didn’t bother until recently to implement threads is not a language for high-performance anything. There is nothing in C++ that concerns itself with systems programming either; that is the C part of it, really. And so on and so forth: the more you look, the more evident that truth is. Even the ancient Fortran can say it’s more concerned with performance than C++. C++ is not ISPC, nor CUDA. It doesn’t come with algorithms and data structures for low-latency, constrained-memory use-cases, nor does it care about large-scale data cases. Even Python can be seen as a better language for high-performance computation, due to its ability to quickly implement embedded DSLs. Nowadays, a lot of high-performance code leverages specialized compilers that are embedded in relatively high-level languages via reflection. None of that is possible in C++.

C++ is a “zero cost abstraction” language. That part is true. But “zero cost” is not about performance. In fact, the guarantee that something won’t cost you anything when it’s not used doesn’t really mean that it will be fast when you do use it! What it gives, instead, is peace of mind. Bjarne said: do you like C? And its ecosystem? Great! Use this one, I guarantee I won’t screw with C, and if you like anything else I added, you get to use it.

And don’t get me wrong. This is genius. And a fundamental lesson that so many other languages still today don’t understand. Making a nice language is the easy part. If you’re even just a tourist of computer languages, having been exposed to concepts from veterans like Lisp and ML and so on, it’s not that hard to come up with a perfectly pleasant little language. And in fact, we have so, so many of them today. The hard part is selling such a language! And if you don’t have a community with a strong need that nobody but you serves, that means you have to persuade people to hop over from whatever they’re currently using. That is almost impossible.

Bjarne understood that the ecosystem is what matters most, and C++ succeeded by being a drop-in replacement for C. Unfortunately, C++ is now so complex that creating a new language that can seamlessly work with and replace it is a tall order; but it would also be an incredibly powerful tool for adoption.

Why did things change, and what can we do about it?

Bjarne got marketing right, he probably understood people more than languages, which is not an insult, to the contrary! People are all that matters.

But if this is the premise, it wouldn’t even be surprising that C++ is drifting away from systems programming, from what C programmers care about. Nowadays, these use-cases are a minority. 
It is not unreasonable, from that point of view, to now try to appeal to the cool kids writing web stuff. People who might be using Python, or Java, or Go and so on. Programmers who are accustomed to working fast, gluing together frameworks and libraries more than writing any bespoke algorithm or data structure. In fact, when you think about it, adding OO to C was just what was trendiest at the time. It was a hype- and marketing-based decision, not really a smart one, as we can now perhaps see more clearly as the OO hype dissipates.

But I don’t even think it’s necessarily a conscious design decision. This language is huge today. Its community is huge, and it decided to go the way of design by committee. Should you never do that? In my circles, the answer is definitely no, design by committee is the death of technology. But let’s present a more positive way of looking at it.

We know that democracy has a cost, a huge cost. It’s definitely not the most efficient way of governing, nor does it produce the most brilliant decisions. It can be, in fact, incredibly dumb. This is not news; even in the Roman Republic there were provisions for senators to elect a dictator for a temporary period, when strong leadership was deemed necessary. Of course, the risk is for a dictator to become a tyrant, as the Romans learned. So, democracies trade efficiency for risk aversion, variance for mean.

And that is what C++ is today. It’s an old language; even if it wants to play it cool, it fundamentally is in maintenance mode, listening to a lot of people with a lot of ideas and going in the direction of the majority, not of strong design decisions. Sometimes people argue that the solution would be to have more representation of certain use cases in the committee.
Perhaps a bit would help, but as much as I like politics, I really don’t care about language politics. If I could vote on something, I would vote to remove people from the committee, to make it smaller, not noisier and bigger, even if the added noise were to argue “in my favor”. I simply don’t think you can have a lot of compromise in technology without ending up with something mediocre.

So? So we live with C++. Not because we like it, but because we need the ecosystem. The compilers, the IDEs, the language extensions, the low-level intrinsics, the legacy code. We restrict our usage to mostly C, and grab whatever rare new feature happens to integrate decently in our workflows. Maybe one day some new language will come, we already use bits and pieces of other ones when needed.

And maybe someday someone will learn the real lesson Bjarne had to teach, and really kill it! There is actually no reason, for example, that a language could not compile its own syntax in its own files, while also allowing the inclusion of C++ headers. Yes, you’d have to suffer the pain of integrating a C++ compiler into whatever you come up with, and yes, your language would most probably just be something that expands to C++. Exactly how C++ started, on top of C. But it’s unsexy, boring work, so most people will write yet another C-ish thing with a bit of ML in it on top of LLVM and call it a day...

Addendum: "Alex" asked an interesting question in the comments - why don't we make our own. I tried to answer that as well if you have a look below! 👇