After PC Gamer published this article on color blindness and games, I was curious to see what could be done, and how, to better serve the 5% of gamers (8% among males) affected by this deficiency. The original article didn't link to any research or implementation hints, but came with a note saying that it should be trivial to do... As most games nowadays do some form of color correction, often via volume color textures, you would think it wouldn't be hard to bake in a global color transform that better maps the RGB space into what colorblind people can see.
Well, indeed, it isn't hard. Most papers seem to reference as a starting point a project report by Fidaner, Lin and Ozguven, "Analysis of Color Blindness", which derives a simple linear transform: go from RGB to LMS color space, simulate color blindness by losing one of the receptors in LMS, compute the difference between the two images, and feed that difference back by adding colors that can still be perceived.
The algorithm is so simple that it's easier to read the paper than my summary of it. Past this simple mapping, all the research I've found improves on it by adjusting for the characteristics of the image that needs to be conveyed, which is not only more expensive but might also be less than ideal in our case, as the image contents change from frame to frame. I wonder if a nonlinear transform could improve the situation, but I haven't found much about static, global color transforms other than the aforementioned work.
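As a sketch, the whole linear pipeline from the report (here for the protanopia case) could look like this in Python/NumPy. The matrix values are the ones commonly quoted from the Fidaner, Lin and Ozguven report, applied directly to RGB as published (see the gamma/degamma caveat in the notes below):

```python
import numpy as np

# RGB -> LMS matrix as given in the Fidaner, Lin and Ozguven report
# (note: applied to RGB values as-is, i.e. in gamma space).
RGB2LMS = np.array([[17.8824,   43.5161,   4.11935],
                    [ 3.45565,  27.1554,   3.86714],
                    [ 0.0299566, 0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Protanopia simulation: the lost L response is reconstructed from M and S.
SIM_PROTAN = np.array([[0.0, 2.02344, -2.52581],
                       [0.0, 1.0,      0.0],
                       [0.0, 0.0,      1.0]])

# Error-distribution matrix: shifts the imperceptible difference toward
# the channels a protanope can still distinguish.
ERR2MOD = np.array([[0.0, 0.0, 0.0],
                    [0.7, 1.0, 0.0],
                    [0.7, 0.0, 1.0]])

def daltonize(rgb):
    """Daltonize an RGB color (or image of shape (..., 3)) in [0, 1]."""
    rgb = np.asarray(rgb, dtype=float)
    simulated = rgb @ (LMS2RGB @ SIM_PROTAN @ RGB2LMS).T  # what a protanope sees
    error = rgb - simulated                               # the information they lose
    return np.clip(rgb + error @ ERR2MOD.T, 0.0, 1.0)
```

Since the whole correction is linear (plus a clamp), it can be baked into a 3D color-grading LUT and applied essentially for free wherever the game already does color correction.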
- The website daltonize.org has implementations of the linear transform in many languages.
- Some papers seem to do the RGB->LMS (and vice versa) conversion in gamma space (including the Analysis of Color Blindness one), while others don't. I guess the confusion comes from the fact that there are many RGB spaces, not only sRGB with its gamma 2.2ish transfer function. From what Wikipedia says, CIE XYZ to LMS is a linear transform, and keeping in mind that sRGB to CIE XYZ is not, we have to degamma before converting and regamma after. I've also found this paper (comes with source code) which makes the conversion from sRGB to LMS an even less trivial matter.
- There are some variants of the original daltonization algorithm. In particular, this paper proposes (among other, less relevant things) the use of a modified error matrix (formula n.5).
- If you wanted to spare some GPU cycles, it's possible to feed the error term computed by the daltonization back into other post-effects, to locally enhance the contrast between areas of similar colors.
  - This paper illustrates the concept.
  - You could feed it back into an existing, suitable effect (e.g. bloom).
  - You could trade a post-processing step for this: for example, remove DOF or motion blur and add an "unsharp mask" filter guided by the error. One way of doing this is to compute an unsharp mask, in a single pass, for both the regular and the color-blind simulated colors, and then feed the error (the contrast loss) back into the image.
- A color-blind simulation mode could, at the very least, help UI designers with their job.
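The gamma handling mentioned in the sRGB/LMS note above can be sketched as follows; these are the standard piecewise sRGB transfer functions, and the linear daltonization matrices would then be applied between the two calls:

```python
import numpy as np

def srgb_to_linear(c):
    """Decode sRGB-encoded values in [0, 1] to linear light (piecewise sRGB curve)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Encode linear-light values in [0, 1] back to sRGB."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)
```

Note that the curve is not a pure power function: the linear toe segment near black is part of the sRGB specification, which is one reason "gamma 2.2" shortcuts and proper sRGB decoding give slightly different results.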
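One possible reading of the error-guided unsharp mask idea, as a hedged sketch (`box_blur` and `contrast_guided_sharpen` are hypothetical names; `simulate` stands for whatever color-blind simulation function is already available):

```python
import numpy as np

def box_blur(img, radius=2):
    """Simple edge-clamped box blur of an (H, W, 3) image."""
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode='edge')
    out = np.zeros(img.shape)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def contrast_guided_sharpen(rgb, simulate, amount=1.0):
    """Re-inject the high-frequency detail that the simulated viewer loses.

    rgb:      (H, W, 3) image with values in [0, 1]
    simulate: function mapping an image to its color-blind simulation
    """
    rgb = np.asarray(rgb, dtype=float)
    detail_full = rgb - box_blur(rgb)   # unsharp mask of the real image
    sim = simulate(rgb)
    detail_sim = sim - box_blur(sim)    # unsharp mask as the simulated viewer sees it
    lost = detail_full - detail_sim     # local contrast lost in the simulation
    return np.clip(rgb + amount * lost, 0.0, 1.0)
```

This re-injects the lost detail per channel; a production version might instead redistribute it toward channels the viewer can actually perceive (as the daltonization error matrix does), or turn it into a pure luminance boost.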