
Saturday, August 30, 2014

I Support Anita


One of the first things you learn when you start making games is to ignore most of the internet. Gamers can undoubtedly be wonderful, but as with everything on the internet, a vocal minority of idiots can overtake most spaces of discourse. Normally that doesn't matter: these people are irrelevant and worthless, even if they think they have some power in their jerking circles. But when words escalate into other forms of harassment, things change.

I'm not a game designer. I play games, I like games, but my work is about realtime rendering; most often the fact that it's used by a game is incidental. So I really didn't want to write this: I didn't think there was anything I could add to what has already been written, and Anita herself does an excellent job of defending her work. Nor did I think the audience of this blog was the right target for this discussion.

Still, we've passed a point where I feel everybody in this industry should be aware of what's happening and state their opinion. I needed to take a "public" stance.

Recap: Anita Sarkeesian is a media critic. She ran a successful Kickstarter campaign to produce reviews of gender tropes in videogames. She has been subject to intolerable, criminal harassment. People who spoke in her support have been harassed, websites have been hacked... Just google her name to see the evidence.

My personal stance:
  • I support the work of Anita Sarkeesian, as I would that of anybody speaking intelligently about anything, even if I were in disagreement.
  • I agree with the message of the Tropes Vs Women series. I find it extremely interesting, agreeable and instrumental in raising awareness of a phenomenon that is in many cases not well understood. 
    • If I have any opinion on her work, it's that I suspect in most cases hurtful stereotypes don't come from malice or laziness (neither of which she mentions as possible causes, by the way), but from the fact that games are mostly made by people like me: male, born in the eighties, accustomed to a given culture. 
    • And even if we can logically see the issues we still have in gender depictions, we often lack the emotional connection and ability to notice their prevalence. We need all the critiques we can get.
  • I encourage everybody to take a stance, especially mainstream gaming websites and gaming companies (really, how can you resist being included here), but even smaller blogs such as this one. 
    • It's time to marginalize harassment and ban such idiots from the gaming community. To tell them that it's not socially acceptable, that most people don't share their views. 
    • Right now most of the video attacks on Tropes Vs Women (I've found no intelligent rebuttal yet) are heavily "liked" on youtube. Reasonable people don't speak up, and that's even understandable, nobody should argue with idiots, they are usually better left ignored. But this got out of hand.
  • I'm not really up for a debate. I understand that there can be a debate on the merit of her ideas, there can be a debate about her methods even, and I'd love to read anything intelligent about it. 
    • We are way past a discussion on whether she is right or wrong. I personally think she is substantially right, but even if she were wrong I think we should all still fight for her to be able to do her job without such vile attacks. When these things happen, to such an extent, it's time for the industry to be vocal, for people to stop and just say no. If you think Anita's work (and she as a person) doesn't deserve at least that respect, I'd invite you to just stop following me, seriously.

Saturday, August 23, 2014

Notes on #Minimalism in code

A few weeks ago I stumbled upon a nice talk at the public library near where I live, about minimalism, by a couple of friends, Joshua Fields Millburn and Ryan Nicodemus, who call themselves "the minimalists".

Worth a watch, especially if you grew up in a country which absorbed the post-Protestant ideology of capitalistic wealth.

What's interesting is that I immediately found this notion of parsimony to be directly applicable to programming. I guess it would apply to most arts, really. I "live tweeted" that "minimalism in life is probably showing the same level of maturity of minimalism in coding" and started sketching some notes for a later post. 
On stage I hear "the two most dangerous words in the English language are: one day". This is going to be good, I wonder if these guys -were- programmers.

- Return on investment

In real life, clutter comes with a price. That price might well not be a limiting factor (I think the amount of crap we accumulate tends to depend on our self-control, while money just dictates how expensive that crap gets), but it's there. In the virtual world of code, on the other hand, clutter has similar negative consequences on people's mental health, but on your finances it might even turn out to be profitable.

We all laugh at the tales of coders creating "job security" via complexity, but I don't really think it's something that happens often in a conscious way. No, what I believe is much more common is that worthless complexity is encouraged by the way certain companies and teams work.

Let's take an example: a logging system. Programmer A writes three functions, open, printf, close; it works great, it's fast, maybe he even shares the knowledge of what he learned writing it. Programmer B creates a logging framework library templated factory whatever pattern piece of crap, badly documented on top of that.
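For scale, here's a minimal sketch of what A's three functions could look like (a sketch only, names hypothetical):

// Minimal logger: three functions, one file handle, nothing else.
#include <cstdio>
#include <cstdarg>

static FILE* g_log = nullptr;

void LogOpen(const char* path) { g_log = std::fopen(path, "w"); }

void LogPrintf(const char* fmt, ...)
{
    if (!g_log) return;
    va_list args;
    va_start(args, fmt);
    std::vfprintf(g_log, fmt, args);
    va_end(args);
    std::fflush(g_log); // so a crash doesn't eat the last lines
}

void LogClose() { if (g_log) { std::fclose(g_log); g_log = nullptr; } }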

What happens? In a good company, B is fired... but it's not that easy. Because there are maybe two or three people who really know that B's solution doesn't solve anything; many others won't really question it, they'll just look at this overcomplicated mess and think it must be cool, that the complexity must exist for a reason. And it's a framework! 
So now what could have been done in three lines takes several thousand, and who wants to rewrite several thousand lines? So B's framework begins to spread, it's shared by many projects, and B ends up having a big "sphere of influence" and actually getting a promotion. Stuff for creepypasta, I know. Don't do Boost:: at night (nor ever).

In real life things are even more complex. Take for example sharing code. Sharing code is expensive (sharing knowledge though is not, always do that) but it can of course be beneficial. Where is the tipping point? "One day" is quite certainly the wrong answer; premature generalization is a worse sin than premature optimization, these days.
There is always a price to pay for generality, even when it's "reasonable". So at every step there is an operations research problem to solve, over quantities that are fuzzy to say the least. 

- Needed complexity

In their presentation, Joshua and Ryan take some time to point out that they are not doing an exercise in living with the least amount of stuff possible, they aren't trying to impose some new-age set of rules. The point is to bring attention to making "deliberate and meaningful" choices, which again I saw as being quite relevant to programming.

Simplicity is not about removing all high-level constructs. Nor is it about being as high-level, and terse, as possible. I'd say it's very hard to encapsulate it in a single metric to optimize, but I like the idea of being conscious of what we do. If we grab a concept, or a construct, do we really know why, or are we just following the latest fad from a blog for cool programmers?
Not that we shouldn't be open minded and experimental. The opposite! We should strive for an understanding of what to use when, be aware, know more.
It's cool to have this wide arsenal of tools, and as in all arts, we are creative and with experience we can find meaningful ways to use quite literally any tool. But there is a huge difference between being able to expertly choose between a hundred brushes for a painting, and thinking that if we do have and use a hundred brushes we will get a nice painting.

Complexity is not always unneeded, but it surely should be feared more.

Do I use a class? What are the downsides? More stuff in a header, more coupling? What is it getting me that I can't achieve without it? Do I need that template? What was a singleton for again, why would it be better than a static? Should I make that operation implicit in the type? Will the reduced typing needed to use a thing offset the extra code needed in the implementation? Will it obscure details that should not be obscured? And so on...

- Closing remarks

Even if I said there isn't a single metric to optimize, there are a few things that I like to keep in mind. 

One is the amount of code compared to the useful computation the code does. The purpose of all code is algorithmic: we transform data. How much of a given system is dedicated to actually transforming the data? How much is infrastructural? 
This isn't about terseness. I can have a hundred lines of assembly that sort an array or one line of python; they lie on opposite ends of the terseness versus low-level control spectrum, but in both cases they are doing useful computation. And on a similar note, how much heat am I wasting doing things that are not moving the bits I need for the final result I seek?
Note that if interpreted strictly, most high-level programming techniques would "worsen" these metrics as programming languages introduce more complex constructions to ease the programmers' life, not to please the hardware. Which is fine if we are aware of the tradeoffs and we make conscious decisions.

The second is the amount of code and complexity that I save versus the amount of code and complexity I have to write to get the saving. Is it really good to have something that is trivial to use "on the surface", but impossible to understand when it comes to the inner workings? 
Sometimes the answer is totally a yes, e.g. I don't really care too much about the minute details of how Mathematica interprets my code when I do exploratory programming there. Even if it ends up with me kicking around the code a few times when I don't understand why things don't work. But I might not want that magic to happen in my game code.
Moreover, most of the time we are not even on the Pareto front, we're not making choices that maximize the utility in one or the other direction. And most of the time such choices are highly non-linear, where we can just accept a bit more verbosity at the caller-side for a ton less headache on the implementation-side.

Lastly, the minimalists also talk about the "packing party": pack everything you have, then unpack only the things that you need as you need them, over a few weeks. Throw away the stuff that stays packed. The code equivalent is coverage testing: push your game through an extensive profiling step, then mark all the lines that were never reached. 
Throwing away stuff is more than fine, it's beneficial. Keep the experience and knowledge. And if needed, we always have p4 anyways.
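Not a substitute for real coverage tooling, but here's a sketch of the ad-hoc spirit of it (all names hypothetical): sprinkle a marker into code you suspect is dead, play the game for a while, then compare the dump against a grep of the marker sites; the ones that never show up are candidates for deletion.

#include <cstdio>
#include <map>
#include <string>

static std::map<std::string, int>& CoverageHits()
{
    static std::map<std::string, int> hits;
    return hits;
}

// Records that this file/line was reached at least once.
#define COVER() (CoverageHits()[std::string(__FILE__) + ":" + std::to_string(__LINE__)]++)

void DumpCoverage() // call at shutdown, diff against a grep of COVER() sites
{
    for (const auto& kv : CoverageHits())
        std::printf("%s hit %d times\n", kv.first.c_str(), kv.second);
}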

Somewhat related, and very recommended: http://www.infoq.com/presentations/Simple-Made-Easy (note though that I do think that "easy" is also a good metric, especially for big projects where you might have programmer turnover, many junior programmers and so on)

Saturday, June 28, 2014

Stuff that every programmer should know: Data Visualization

If you're a programmer and you don't have visualization as one of the main tools in your belt, then good news: you just found an easy way to improve your skill set. Really, it should be taught in any programming course.

Note: This post won't get you from zero to visualization expert, but hopefully it can pique your curiosity and will provide plenty of references for further study.

Visualizing data has two main advantages compared to looking at the same data in a tabular form. 

The first is that we can pack more data into a graph than we can absorb by looking at numbers on a screen, even more so if we make our visualizations interactive, allowing exploration inside a data set. Our visual bandwidth is massive!

This is also useful because it means we can avoid (or rely less on) summarization techniques (statistics) that are by their nature "lossy" and can easily hide important details (Anscombe's quartet is the usual example).

Anscombe's quartet, from wikipedia. The data sets have the same statistics, but are clearly different when visualized

The second advantage, which is even more important, is that we can reason about the data much better in a visual form. 

0.2, 0.74, 0.99, 0.87, 0.42, -0.2, -0.74, -0.99, -0.87, -0.42, 0.2

What's that? How long do you have to think to recognize a sine in those numbers? You might start reasoning about the symmetries, 0.2, -0.2, 0.74, -0.74, then the slope and so on, if you're very bright. But how long do you think it would take to recognize the sine by plotting that data on a graph?

It's a difference of orders of magnitude. Like in a sci-fi B-movie where you've been using only 10% of your brain (not really): imagine if we could access 100%, interesting things begin to happen.

I think most of us do know that visualization is powerful, because we can appreciate it when we work with it, for example in a live profiler.
Yet I've rarely seen people dumping data from programs into graphing software and I've rarely seen programmers that actually know the science of data visualization.

Visualizing program behaviour is even more important for rendering engineers, or for any code that doesn't just either fail hard or work right.
We can easily implement algorithms that are wrong but don't produce a completely broken output. They might just be slower (i.e. to converge) than they need to be, or more noisy, or just not quite "right", causing our artists to try to adjust for our mistakes by authoring fixes in the art (this happens -all- the time), and so on.
And there are even situations where the output is completely broken, but it's just not obvious from looking at a tabular output; a great example of this is the structure of LCG random numbers.

This random number generator doesn't look good, but you won't tell from a table of its numbers...
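As a sketch of the kind of dump-and-plot I mean (the RANDU constants below are the classic bad example, an assumption on my part, not necessarily what the image above shows): write consecutive triples of an LCG to a CSV and scatter-plot them in any graphing tool. The table of numbers looks innocent; the 3D plot shows every point collapsing onto a handful of planes.

#include <cstdio>
#include <cstdint>

int main()
{
    uint32_t x = 1; // RANDU: x' = 65539 * x mod 2^31, odd seed
    uint32_t a = 0, b = 0;
    FILE* out = std::fopen("lcg_triples.csv", "w");
    for (int i = 0; i < 30000; ++i)
    {
        uint32_t c = (x * 65539u) & 0x7fffffffu;
        if (i >= 2) // plot (x[n], x[n+1], x[n+2]) normalized to [0,1)
            std::fprintf(out, "%f,%f,%f\n", a / 2147483648.0, b / 2147483648.0, c / 2147483648.0);
        a = b; b = c; x = c;
    }
    std::fclose(out);
    return 0;
}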


- Good visualizations

The main objective of visualization is to be meaningful. That means choosing the right data to study a problem, and displaying it in the right projection (graph, scale, axes...).

The right data is the data that is interesting, that shows the features of our problem. What questions are we answering (purpose)? What data do we need to display?

The right projection is the one that shows such features in an unbiased, perceptually linear way, and that makes different dimensions comparable and possibly orthogonal. How do we reveal the knowledge that the data is hiding? Is it x or 1/x? Log(x)? Should we study the ratio between quantities or their absolute difference? And so on.

Information about both data and scale comes at first from domain expertise. A light (or sound) intensity probably should go on a logarithmic scale, maybe a dot product should be displayed as the angle between its vectors, many quantities have a physical interpretation and a perceptual interpretation or a geometrical one and so on.

But even more interestingly, information about data can come from the data itself, by exploration. In an interactive environment it's easy to just dump a lot of data to observe, notice certain patterns and refine the graphs and data acquisition to "zoom in" particular aspects. Interactivity is the key (as -always- in programming).


- Tools of the trade

When you delve a bit into visualization you'll find that there are two fairly distinct camps.

One is visualization of categorical data, often discrete, with the main goal of finding clusters and relationships. 
This is quite popular today because it can drive business analytics, operate on big data and in general make money (or pretty websites). Scatterplot matrices, parallel coordinate plots (very popular), Glyph plots (star plots) are some of the main tools.

Scatterplot, nifty to understand what dimensions are interesting in a many-dimensional dataset

The other camp is about visualization of continuous data, often in the context of scientific visualization, where we are interested in representing our quantities without distortion, in a way that is perceptually linear.

This usually employs mostly position as a visual cue, thus 2d or 3d line/surface or point plots.
These become harder with the increase of dimensionality of our data as it's hard to go beyond three dimensions. Color intensity and "widgets" could be used to add a couple more dimensions to points in a 3d space but it's often easier to add dimensions by interactivity (i.e. slicing through the dataset by intersecting or projecting on a plane) instead.

CAVE, soon to be replaced by oculus rift
Both kinds of visualizations have applications to programming. For deterministic processes, like the output or evolution in time of algorithms and functions, we want to monitor some data and represent it in an objective, undistorted manner. We know what the data means and how it should behave, and we want to check that everything goes according to what we think it should.
But there are also times where we don't care about exact values but seek insight into processes of which we don't have exact mental models. This applies to all non-deterministic issues, networking, threading and so on, but also to many things that are deterministic in nature but have a complex behaviour, like memory hierarchy accesses and cache misses.


- Learn about perception caveats

Whatever your visualization is though, the first thing to be aware of is visual perception: not all visual cues are useful for quantitative analysis. 

Perceptual biases are a big problem: because they are perceptual, we tend not to see them, we are just subconsciously drawn to some data points more than others when we should not be.


Metacritic's homepage has horrid bar graphs.
As the numbers are bright and placed below a variable-size image, games with longer images seem to have lower scores...

Beware of color, one of the most abused, misunderstood tools for quantitative data. Color (hue) is extremely hard to get right, it's very subjective and it doesn't express quantities or relationships well (which color is "less" than another?), yet it's used everywhere.
Intensity and saturation are not great either, again very commonly used but often inferior to other cues like point size or stroke width.


From complexdiagrams


- Visualization of programs

Programs are these incredibly complicated projects we manage to carry forward, but if that's not challenging enough we really love working with them in the most complicated ways possible. 

So of course visualization is really limited. The only "mainstream" usage you will probably have encountered is in the form of bad graphs of data from static analysis. Dependencies, modules, relationships and so on.

A dependency matrix in NDepend

And if you have to see your program's execution itself, it -has- to be text. Watch windows, memory views with hex dumps and so on. Visual Studio, which is probably the best debugger IDE we have, is not visual at all, nor does it allow for easy development of visualizations (it's even hard to grab data from memory in it).

We're programmers, so it's not a huge deal to dump data to a file or peek memory [... my article]; then we can visualize the output of our code with tools that are made for data. 
But an even more important tool is to apply visualization directly to the behaviour of code, at runtime. This is really a form of tracing, which most often is limited to what's known as "printf" debugging.

Tracing is immensely powerful as it tells us at a high level what our code is doing, as opposed to the detailed inspection of how the code is running that we can get from stepping in a debugger.
Unfortunately today there is basically no tool for graphical representation of program state over time, so you'll have to roll your own. Working on your own source code it's easy enough to put in some instrumentation to export data to a live graph, and in my own experiments I don't use any library for this, just the simplest possible ad-hoc code to suck the data out.

Ideally though it would be lovely to be able to instrument compiled code; it's definitely possible but much more of a hassle without the support of a debugger. Another alternative that I sometimes adopt is to just have an external application peek at regular intervals into my target's process memory.
It's simple enough, but it captures data at a very low frequency so it's not always applicable; I use it most of the time not on programs running in realtime but as a live memory visualization while stepping through in a debugger.
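To give an idea, the "simplest possible ad-hoc code" can be as dumb as this (a sketch, names hypothetical): append named samples to a text file and let an external script (Processing, Mathematica, gnuplot...) tail it and redraw a live graph.

#include <cstdio>

void TraceValue(const char* name, float value)
{
    static FILE* trace = std::fopen("trace.txt", "w");
    static long sample = 0;
    if (!trace) return;
    std::fprintf(trace, "%ld %s %f\n", sample++, name, value);
    std::fflush(trace); // keep the file readable while the program is still running
}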

Apple's recent Swift language seems a step in the right direction, and looks like it pulled some ideas from Bret Victor and Light Table.
Microsoft had a timid plugin for Visual Studio that did some very basic plotting but doesn't seem to be actively updated, and another one for in-memory images, but what's really needed is the ability to export data easily and in realtime, as good visualizations usually have to be made ad-hoc for a specific problem.

Cybertune/Tsunami

If you want to delve deeper into program visualization there is a fair bit written about it by academia, with also a few interesting conferences, but what's even more interesting to me is seeing it applied to one of the hardest coding problems: reverse engineering. 
It should perhaps not be surprising, as reversers and hackers are very smart people, so it's natural for them to use the best tools for their job.
It's quite amazing how much one can understand, with very little other information, just by looking at visual fingerprints, data entropy and code execution patterns.
And again visualization is a process of exploration, it can highlight some patterns and anomalies to then delve in further with more visualizations or by using other tools.

Data entropy of an executable, graphed in hilbert order, shows signing keys locations.


- Bonus links

Visualization is a huge topic and it would be silly to try to teach everything that's needed in a post, but I wanted to give some pointers hoping to get some programmers interested. If you are, here are some more links for further study. 
Note that most of what you'll find on the topic nowadays is either infovis and data-driven journalism (explaining phenomena via understandable, pretty graphics) or big-data analytics. 
These are very interesting and I have included a few good examples below, but they are not usually what we seek; as domain experts we don't need to focus on aesthetics and communication, but on unbiased, clear quantitative data visualization. Be mindful of the difference.

- Addendum: a random sampling of stuff I do for work
All made either in Mathematica or Processing and they are all interactive, realtime.
Shader code performance metrics and deltas across versions 
Debugging an offline baker (raytracer) by exporting float data and visualizing it as point clouds
Approximation versus ground truth of BRDF normalization
Approximation versus ground truth of area lights
BRDF projection on planes (reasoning about environment lighting, card lighting)

Wednesday, June 25, 2014

Oh envmap lighting, how do we get you wrong? Let me count the ways...

Environment map lighting via prefiltered cubemaps is very popular in realtime CG.

The basics are well known:
  1. Generate a cubemap of your environment radiance (a probe, even offline or in realtime).
  2. Blur it with a cosine hemisphere kernel for diffuse lighting (irradiance) and with a number of phong lobes of varying exponent for specular. The various convolutions for phong are stored in the mip chain of the cubemap, with rougher exponents placed in the coarser mips.
  3. At runtime we fetch the diffuse cube using the surface normal and the specular cube using the reflection vector, forcing the latter fetch to happen at the mip corresponding to the material roughness (see the sketch after this list).
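Spelled out in code, step 3 is roughly the following. This is only a sketch: the roughness-to-mip mapping is a per-engine convention (a simple linear one here), and the shader-side fetches are only hinted at in comments with hypothetical names.

// mip 0 holds the sharpest (mirror-like) preconvolution, the last mip the roughest lobe.
float RoughnessToMip(float roughness, int mipCount)
{
    return roughness * float(mipCount - 1);
}

// Shader side (pseudo-HLSL, names hypothetical):
//   float3 R    = reflect(-V, N);
//   float3 spec = texCUBElod(specCube, float4(R, RoughnessToMip(roughness, mipCount))).rgb;
//   float3 diff = texCUBE(diffuseCube, N).rgb;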
Many engines stop at that, but a few extensions emerged (somewhat) recently: warping ("localizing") the cubemap as if it represented light coming from a proxy volume placed in the scene, and preintegrating the non-distribution parts of the BRDF into a correction factor.
Especially the last extension allowed a huge leap in quality and applicability; it's so nifty it's worth explaining for a second.

The problem with Cook-Torrance BRDFs is that they depend on three functions: a distribution function that depends on N.H, a shadowing function that depends on N.H, N.L and N.V, and the Fresnel function that depends on N.V.

While we know we can somehow solve functions that depend on N.H by fetching a prefiltered cube in the reflection direction (not really the same, but the same difference that there is between the Phong and Blinn specular models), anything that depends on N.V would add another dimension to the preintegrated solution (requiring an array of cubemaps), and we wouldn't know what to do with N.L at all, as we don't have a single light vector in environment lighting.

The cleverness of the solution that was found can be explained by observing the BRDF and how its shape changes when manipulating the Fresnel and shadowing components.
You should notice that the BRDF shape, and thus the filtering kernel on the environment map, is mostly determined by the distribution function, which we know how to tackle. The other two components don't change the shape much, but they scale it and "shift" it away from the H vector. 

So we can imagine an approximation that integrates the distribution function with a preconvolved cubemap mip pyramid, while the other components are relegated into a scaling factor by preintegrating them against an all-white cubemap, ignoring how the lighting is actually distributed. 
And this is the main extension we employ today: we correct the cubemap that has been preintegrated only with the distribution lobe with a (very clever) biasing factor.
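Applied at runtime, the correction looks roughly like this (a sketch, all names hypothetical): envScale and envBias are the two channels of a 2D lookup table preintegrated against an all-white environment, indexed by N.V and roughness.

struct Float3 { float x, y, z; };

// prefiltered: the specular cubemap fetched along R at the mip matching roughness
// F0: specular reflectance at normal incidence
Float3 EnvSpecular(Float3 prefiltered, Float3 F0, float envScale, float envBias)
{
    return { prefiltered.x * (F0.x * envScale + envBias),
             prefiltered.y * (F0.y * envScale + envBias),
             prefiltered.z * (F0.z * envScale + envBias) };
}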

All good, and it works, but now: is all this -right-? Obviously not! I won't offer solutions here (just yet) but can you count the ways we're wrong?
  1. First and foremost the reflection vector is not the half-vector, obviously.
    • The preconvolved BRDF expresses a radially symmetric lobe around the reflection vector, but a half-vector BRDF is not radially symmetric at grazing angles (when H!=N), it becomes stretched.
    • It's also different from its reflection-vector based counterpart when R=H=N, but there it can be adjusted with a simple constant roughness modification (just remember to do it! see the sketch after this list).
  2. As we said, Cook-Torrance is not based only on an half-vector lobe. 
    • We have a solution that works well but it's based only on a bias, and while that accounts for the biggest difference between using only the distribution and using the full CT formulation, it's not the only difference.
    • Fresnel and shadowing also "push" the BRDF lobe so it doesn't reach its peak value on the reflection direction.
  3. If we bake lighting from points close enough that perspective matters, then discarding position dependence is wrong. 
    • It's true that it is perceptually hard for us to judge where lighting comes from when we see a specular highlight (good!), but for reflections of nearby objects the error can be easy to spot. 
    • We can employ warping as we mentioned, but then the preconvolution is warped as well.
    • If for example we warp the cubemap by considering it representing light from a box placed in the scene, what we should do is to trace the BRDF against the box and see how it projects onto it. That projection won't be a radially symmetric filtering kernel in most cases.
    • In the "box" localized environment map scenario the problem is closely related to texture card area lights.
  4. We disregard occlusions.
    • Any form of shadowing of the preconvolved environment lighting that just scales it down is wrong, as occlusion should happen before prefiltering.
    • Still -DO- shadow environment map lighting somehow. A good way is to use screen-space (or voxel-traced) computed occlusion by casting a cone emanating from the reflection vector, even if that's done without considering roughness for the cone size, or somehow precomputing and baking some form of directional occlusion information.
    • Really this is still due to the fact that we use the envmap information at a point that is not the one from which it was baked.
    • Another good alternative to try to fix this issue is renormalization as shown by Call of Duty.
  5. We disregard surface normal variance.
    • Forcing a given miplevel (texCubeLod) is needed as mips in our case represent different lobes at different roughnesses, but that means we don't antialias that texture considering how normals change inside the footprint of a pixel (note: some HW gets that wrong even with regular texCube fetches)
    • The solution here is "simple" as it's related to the specular antialiasing we do by pushing normal variance into specular roughness.
    • But that line of thought, no matter the details, is also provably wrong (still -do- that). The problem is closely related to the "roughness modification" solution for spherical area lights and it suffers from the same issue: the proper integral of the BRDF with a normal cone is flatter than what we get at any roughness on the original BRDF.
    • Also, the footprint of the normals won't be a cone with a circular base, and even what we get with the finite difference ddx/ddy approximation would be elliptical.
  6. Bonus: compression issues for cubemaps and dx9 hardware.
    • Older hardware couldn't properly do bilinear filtering across cubemap edges, thus leading to visible artifacts, which some corrected by making sure the edge texels were the same across faces.
    • What most don't consider though is that if we use a block-compression format on the cubemap (DXT, BCn and so on) there will be discontinuities between blocks which will make the edge texels different again. Compressors in these cases should be modified so the edge blocks share the same reference colors.
    • Adding borders is better.
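On the roughness modification mentioned in point 1: for exponent-based lobes, one commonly cited instance of it is the factor-of-four relationship between Blinn (half-vector) and Phong (reflection-vector) exponents; a sketch, assuming an exponent-parameterized roughness:

// A Blinn-Phong lobe around H with exponent n looks, near R=H=N, roughly like a
// Phong lobe around R with exponent n/4, so adjust before preconvolving around R.
float PhongExponentFromBlinn(float blinnExponent)
{
    return blinnExponent * 0.25f;
}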
I'll close with some links that might inspire further thinking:
#physicallybasedrenderingproblems #naturesucks

Monday, June 16, 2014

Bonus round: languages, metaprogramming and terseness

People always surprise me. I didn't expect a lot out of my latest post but instead it spawned many very interesting comments and discussion over twitter and a few Reddit threads.

I didn't want to talk about languages, really; I did that a lot in the past already. I wanted to make a few silly examples about why we code in C++ and what could be needed to move us out of it (and only us: gamedevs, system, low-level, AAA console guys, not the programming world in general, which often has already ditched C++ and sometimes is even perfectly ok with OO), but instead I got people being really passionate about languages and touting the merits of D, Rust, Julia (or even Nimrod and Vala).

Also to my surprise I didn't get any C++ related flame, nobody really trying to prove that C++ is the best possible given the constraints or arguing the virtues of OO and so on; it really seems most agree today and we're actually ready and waiting for the switch!

Anyhow, I wanted to write an addendum to the post because it's related to the humanistic point of view I tried to take when talking about language design.

- Beauty and terseness

Some people started talking about meta-programming and in general expressiveness and terseness. I wanted to offer a perspective on how I see some language concepts in terms of what I do.

In theory, beauty in programming is simplicity and expressiveness, and terseness. To a degree programming itself can be seen as data compression, so the more powerful our statements are, the more they express, the more they compress, the better.
But obviously this analogy goes only so far, as we wouldn't consider programming in an LZ-compressed code representation to be beautiful, even if it would truly be more terse (and it would be a form of meta-programming, even).

That's obviously because programming languages are not only made to be expressive, but also understandable by the meat bags that type them in, so there are trade-offs. And I think we have to be very cautious with them.

Take meta-programming for example, it allows truly beautiful constructions, the ability of extending your language semantics and adapting it to your domain, all the way to embedded DSLs and the infamous homoiconicity dear to lispers. 
But as a side effect, the more you dabble in that, the more your language statements lose meaning in isolation (to the Lisp extreme, where there is almost no syntax to carry any meaning), and that's not great.

There might be some contexts where a team can accept building upon a particular framework of interpretation of statements, and they get trained in it and know that in this context a given statement has a given meaning.
To a degree we all need to get used to a codebase before really understanding what's going on, but anything that adds burden to the mental model of what things mean is a very hard trade.

For gamedev in particular it is quite important not only that A = B/C means A = B/C, but also that it is executed in a fixed way. Perhaps at times we overemphasize the need for control (and for example often have to debate to persuade people that GC isn't evil, lack of control over heap allocation is) because of a given cultural background, but undeniably the need does exist.

[Small rant] Certainly I would not be happy if B/C meant something silly like concatenating strings. I could possibly even tolerate "+" for that, because it's so common it has become a natural semantic, maybe stretching it even "|" or "&". But "/" would be completely fucked up. Unless you're a Boost programmer and are furiously masturbating over the thought of how pretty it is to use "/" to concatenate paths, because directory dividers are slashes and you feel so damn smart.

That's why most teams won't accept metaprogramming in general and will allow C++ templates only as generics, for collections. And will allow operator overloading only for numeric types and basic operations.
...And why they don't like references if they are non-const (the argument is that a mutable reference parameter to a function can change the value of a variable without that change being syntactically highlighted at the call-site; a better option would be an "out" annotation like C# or HLSL have). ...And don't like anything that adds complexity to the resolution rules of C++ calls, or exceptions, or the auto-generated operators of C++ classes, and should thus also stay away from R-value references.

- Meatbags

For humans certain constraints are good, they are actually liberating. Knowing exactly what things will do allows me to reason about them more easily than languages that require lots of context and mental models to interpret statements. That's why going back to C is often as fun as discovering python for the first time.

Now of course YMMV, and there are situations where truly we can tolerate more magic. I love dabbling with Mathematica; even if most of the time I don't exactly know how the interpreter will chain rules to compute something, it works, and even when it doesn't I can iterate quickly and try workarounds until it does.
Sometimes I really need to know more, and that's when you open a can of worms, but for that kind of work it's fine: it's prototyping, it's exploratory programming, it's fun and empowering and productive. And I'm fine not knowing and just kicking stuff around, clicking buttons until things work, in these contexts; not everybody has to know how things work all the way down to the metal, and definitely not all the time, there are places where we should just take a step back...
But I wouldn't write a AAA console game engine that way. I need to understand, to have a precise, simple mental model that is "standard" and understood by everybody on a project, even new hires. C++ is already too complex for this to happen, and that's one of the big reasons we "subset" it into a manageable variant enforced by standards and linters. 

Abstractions are powerful, but we should be aware of their mental cost, and maybe counter it with simple implementations and great tools (e.g. not making templates impossible to debug like they are in C++...) so that when they fail it doesn't feel like you're digging into a compiler.

Not all language refinements impose a burden, so it's not that there can't be a language more expressive than C that is still similarly easy to understand; but many language decisions come with a tradeoff, and I find the ones that loosen the relationship between syntax and semantics particularly taxing.

As the infamous Gang of Four wrote (and I feel dirty citing a book I'm so averse to): "highly parameterized software is harder to understand and build than more static software".

That's why, for increased productivity, I advocate seeking interactivity and zero iteration times, live-coding and live-inspection, over most other language features these days.

And before going to metaprogramming I'd advocate seeking solutions (if needed) in better type systems. C++ functors are just lambdas and first-class functions done wrong, parametric polymorphism should be bounded, auto_ptr is a way to express linear types and so on... Bringing functionality into the language is often better than having a generic language that can be extended in custom ways. Better for the meatbags and for the machine (tools, compilers and so on).

That said, every tool is just that, a tool. Even when I diss OOP it's not that having OO is evil per se; really, a tool is a tool, the evil starts when people reason in certain ways and code in certain ways. 
Sometimes the implementation of a given tool is also particularly, objectively broken. If you only know metaprogramming from C++ templates, which were just an ignorant attempt at generics gone very wrong (and that is still not patched today: concepts were rejected, and I don't trust them to be implemented in a sane way anyway), then you might be overly pessimistic.

But with great power often comes great complexity in really knowing what's going on, and sane constraints are an undervalued tool; we often assume that fewer constraints will be a productivity win, and that's not true at all.

- Extra marks: a concrete example

When I saw Boost::Geometry I was so enraged I wanted to blog about it, but it's really so hyperbolically idiotic that I decided to take the high road and ignore it - Non ragioniam di lor, ma guarda e passa ("let us not speak of them, but look, and pass on").

As an example I'll post here a much more reasonable use case someone showed me in a discussion. I actually have no qualms with this code, it's not even metaprogramming (just parametric types and unorthodox operator overloading) and could be appropriate in certain contexts, so it's useful for showing some tradeoffs.

va << 10, 10, 255, 0, 0, 255;

Can you guess what that is? I can't, so I would go and look at the declaration of va for guidance.

vertex_array < attrib < GLfloat, 2 >, attrib < GLubyte, 4 > > va;

Ok so now it's clear, right? The code is taking numbers and packing them into an interleaved buffer for rendering. I can also imagine how it's implemented, but not with certainty, I'd have to check. The full code snippet was:

vertex_array < attrib < GLfloat, 2 >, attrib < GLubyte, 4 > > va;

va << 10, 10, 255, 0, 0, 255; // add vertex with attributes (10, 10) and (255, 0, 0, 255)
// ...
va.draw(GL_TRIANGLE_STRIP);

This is quite tricky to implement in a simpler C-style C++ also because it hits certain deficiencies of C, the unsafe variadic functions and the lack of array literals. 
Let's try, one possibility is:

VertexType vtype = {GLfloat, 2, GLubyte, 4};
void *va = MakeVertexArray(vtype);

AddVertexData(&va, vtype, 10, 10, 255, 0, 0, 255, END);
Draw(va, vtype);

But that's still quite a bit magical at the call-site, not really any better. Can we improve? What about:

VertexType vtype = {GLfloat, 2, GLubyte, 4};
void *va = MakeVertexArray(vtype);

va = AddFloatVertexData(va, 10, 10);
va = AddByteVertexData(va, 255, 0, 0, 255);
Draw(va);

or better (as chances are that you want to pass vertex data as arrays here and there):

VertexType vtype = {GLfloat, 2, GLubyte, 4};
void *va = MakeVertexArray(vtype);

float vertexPos[] = {10, 10};
byte vertexColor[] = {255, 0, 0, 255};
va = AddVertexData(va, vertexPos, array_size(vertexPos));
va = AddVertexData(va, vertexColor, array_size(vertexColor));
Draw(va);

And that's basically plain old ugly C! How does this fare?

The code is obviously more verbose, true. But it also tells exactly what's going on without the need for -any- global information; in fact we're hardly using types at all. We don't have to look around or add comments, and we can imagine from the call site exactly the logic behind the implementation. 
It's not "neat" at all, but it's not "magical" anymore. It's also much more "grep-able", which is a great added bonus.

And now try to imagine the implementation of both options. How much simpler and smaller will the C version be? We gained clarity both at the call-site and in the implementation using a much less "powerful" paradigm!

Other tradeoffs could be made, templates without the overloading would already be more explicit, or we could use a fixed array class for example to pass data safely, but the C-style version scores very well in terms of lines of code versus computation (actual work done, bits changed doing useful stuff) and locality of semantics (versus having to know the environment to understand what the code does).

An objection could be that the templated and overloaded version is faster, because it statically knows the sizes of the arrays and the types and so on, but it's quite moot. The -only- reason the template knows is really because it's inline, and it -must- be. The C-style version offers the option of being inline for performance, or not, if you don't need it and don't want the bloat.

It's true that the fancy typed C++ version is more safe, and it is equally true that such safety could be achieved with a better type system. Not -all- language innovations carry a burden on the mental model of program execution.

Already in C99 for example you could use variable length arrays and compound literals to somewhat ameliorate the situation, but really a simple solution that would go a long way would be array literals and the support of knowing array sizes passed to functions (an optional implicit extra parameter).

Note that I wrote this in C++, but it's not about C++: even in metaprogramming environments that are MUCH better than C++, like Lisp with hygienic macros, metaprogramming should be used with caution.
In Lisp it's easy to introduce all kinds of new constructs that look nifty and shorten the code, but each time you add one it's one more foreign syntax that is local to a context and that people have to learn and recognize. Not to be abused.
Also, this is valid for any abstraction; abstraction is always a matter of trade-offs. We sometimes forget this, and start "loving" the "beauty" of generic code even if it's not actually the best choice for a problem.