I am red-green colorblind, and I've always just accepted that I'll have a hard time with certain UIs. I've never really thought it worth the time to design for the colorblind. Recently, though, I noticed Trello has a colorblind mode that uses patterns as well as colors to denote labels. That's a simple yet effective approach, and it has helped me a lot!
Certainly would be nice to see more of this in the design thought process.
Just anecdata, but I work on business software and a lot of effort is spent by UX and Engineering addressing color blindness / visual impairment. All color palettes are chosen with this in mind, screen readers require lots of work across disparate stacks, TTS engines, etc. Just wanted to relay that there are those who try to make a difference!
Even if the simulations are probably never exact, maybe they could be made good enough to be used for some kind of automatic testing?
By that I mean: measure the contrast of each pixel with its neighbouring pixels as they appear "normally", then compare that with the neighbour contrast it would have under a given type of color blindness. If there is a lot of contrast normally, but not with color blindness, add that up into a "problem score" and/or mark it in the image visually. Optionally, only care about images and text, not decorations. And further refinements, like not just taking direct neighbours into account, etc. I'm a total noob when it comes to all this, but I've seen enough to think it might be easy in the grand scheme of things. The trickiest part is probably finding good models for how things look to colorblind people (on a perfectly calibrated monitor...)
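For what it's worth, here's a rough sketch of that scoring idea in Python. The 3x3 protanopia matrix is the commonly cited Machado et al. approximation, and the thresholds are pure guesses; everything here is illustrative, not a validated model:

```python
# A rough "problem score": how often do two neighbouring pixels contrast
# clearly in normal vision but not under a simulated color deficiency?
# Matrix: Machado et al. protanopia approximation (an assumption; swap
# in whatever simulation model you trust).
PROTANOPIA = [
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
]

def simulate(rgb, matrix=PROTANOPIA):
    """Apply a 3x3 color-deficiency matrix to one RGB pixel (0-255)."""
    return tuple(
        min(255.0, max(0.0, sum(m * c for m, c in zip(row, rgb))))
        for row in matrix
    )

def distance(a, b):
    """Plain Euclidean distance in RGB as a crude contrast proxy."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def problem_score(pixels, width, height, normal_min=100, sim_max=20):
    """pixels: flat row-major list of RGB tuples. Counts horizontal
    neighbour pairs that are distinct normally but merge when simulated."""
    score = 0
    for y in range(height):
        for x in range(width - 1):
            a, b = pixels[y * width + x], pixels[y * width + x + 1]
            if (distance(a, b) > normal_min
                    and distance(simulate(a), simulate(b)) < sim_max):
                score += 1
    return score

# Pure red next to a dark green: very distinct normally, nearly
# identical after the protanopia transform.
print(problem_score([(255, 0, 0), (0, 37, 0)], 2, 1))  # → 1
```

Running it over a real screenshot would just mean feeding in the pixel data (e.g. from Pillow's Image.getdata()), probably after blurring or downsampling so antialiasing edges don't dominate the score.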
Personally, even though I am not colorblind, I would love to at least see how my own things come out under the lens of such a tool (or suite). Just as I think working with grayscale is interesting in its own right, making pretty things that are discernible for all types of colorblindness, and still look okay in full color, sounds like a cool challenge. Who knows what you could make with that kind of restriction (e.g. that colorblind mode for Trello looks sweet; I would enable it in a heartbeat). And even if I find there are some compromises I just don't want to make all the time, at least it would be an informed and conscious decision.
Windows 10 has global colour filters that make this super easy. Windows key + Ctrl + C toggles it on and off, and you can change which filter to use in Settings → Ease of Access → Colour Filters.
In addition to greyscale, there are colourblindness filters for red-green (green weak), red-green (red weak), and blue-yellow.
I have written some internal tools that do things in this vein. Open sourcing libs for this is a good idea – I've added it to my list of "stuff i would like to do at work if i get a minute".
You're correct that actually transforming and comparing these color values is technically straightforward.
It has some tedious edges, though. If anyone else is interested, here's the hard/annoying stuff:
1) What do you sample, and how do you sample it? We let WCAG tell us which instances to test, easy enough. But actually pulling all the pieces out of the UI? You need to be able to tell what text corresponds to what backgrounds, etc etc. This depends on the UI environment you're testing. Most of these relationships are inferred easily-ish in HTML. Harder in a language where you're doing more basic operations like inserting rects and text objects and arranging them without markup-type nesting. And all that assumes you can actually grab these objects in a form where you can directly ask for their properties. You wanna do this for static images? Ouch.
2) You have to have a few kinds of "primitives" or basic object types. Fills, strokes, icons, text – for two reasons. One, it's easier to write case-specific code that reads them from objects than generalized tools that pull color from any of the handful of properties that might have color in them. The other is that WCAG has different required contrast ratios for different kinds of things. Text has its own guidelines, and the requirements change based on things like size and boldness, so you need to do a few kinds of tests with varying rubrics.
3) Multiple properties. Some objects have multiple fills assigned, or a fill on multiple properties, where only one of those properties is rendered based on context. Those need to be handled. Say you're supposed to measure the contrast of text on a background. But that background is transparent, so the effective background color of the text is the result of two colors mixing. Your tool needs to know this relationship, which often means traversing waaay up the DOM/element tree in the case of things like page backgrounds. And then it needs to be able to calculate color combinations and test against those. If this relationship includes more than one stacked transparent object – well, first, wtf is your UI – but you've gotta handle all that.
4) How does it get this information? The most automated way is with a scraper. Lots and lots of edge cases to write for. The simpler way is with something that looks more like unit tests. Writing manual tests at the component level. The upside here is that you're able to be more explicit about which properties are targeted, and manually declare the relevant relationships between objects. The downside is that this is hellish to write, worse to maintain – crazy brittle and does not scale at all.
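On the ratio-testing side of 2): the WCAG contrast math itself is the simple part. A minimal sketch of the 2.x formula (relative luminance, then (L1 + 0.05) / (L2 + 0.05); AA asks for at least 4.5:1 on normal text and 3:1 on large text):

```python
def linearize(c):
    """Convert one 8-bit sRGB channel to linear light, per WCAG 2.x."""
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an (r, g, b) color, 0.0 to 1.0."""
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(a, b):
    """WCAG contrast ratio between two RGB colors, from 1:1 to 21:1."""
    hi, lo = sorted((relative_luminance(a), relative_luminance(b)),
                    reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black text on a white background is the maximum possible, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```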
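And for the transparent-background case in 3): once you've found the stack of layers, computing the effective color is just repeated source-over alpha compositing. A sketch (done in gamma-encoded sRGB here, which is roughly how CSS colors composite by default):

```python
def over(fg, bg):
    """Composite an RGBA foreground (alpha 0.0-1.0) over an opaque RGB
    background using the source-over rule; returns the effective RGB."""
    r, g, b, a = fg
    return tuple(round(a * f + (1 - a) * back)
                 for f, back in zip((r, g, b), bg))

# 50%-opaque white panel over a black page background → mid gray.
print(over((255, 255, 255, 0.5), (0, 0, 0)))  # → (128, 128, 128)
```

Stacked translucent layers just nest, e.g. over(top, over(middle, page_bg)), working from the bottom of the element tree up.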
Visualizing this info is cool. Overlaying it on top of the UI is one way; collecting it into a big consistent dataset is helpful for other things too. Think of an AWS-status-page-style dashboard for the accessibility of everything in your UI library.
One of the more popular things in this space, to get a feel for how it might work (ugly but a start):
http://wave.webaim.org/extension/
Some games just let you change specific meaningful UI colors arbitrarily (maybe with some presets for common forms of color blindness). I feel like that's probably the best route for the color side of things.