
The deceitful panacea of alt text

One of my favorite accessibility myths is this pervasive idea that alternate text is some kind of accessibility panacea. I get it – it’s theoretically a thing that content creators of any skill level can do to make their content more accessible. Because of this (and because alt is technically a required attribute on <img> tags in HTML), it seems to be one of the first things people learn about accessibility. For the uninitiated, alternate text (from here on out, alt text) is metadata attached to an image that assistive tech (such as a screen reader) will use to present a description of an image (since we don’t all have neural network coprocessors to do deep machine learning and describe images for us).
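At the markup level, it’s just an attribute. A minimal sketch – the filename and figures here are hypothetical:

    <!-- Screen readers announce the alt text in place of the image -->
    <img src="q3-sales.png"
         alt="Bar chart of Q3 sales: Widgets $14k, Gadgets $9k, Gizmos $4k.">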

This is all very good, if we have a raster-based image with no other information to work with. The problem is, we should almost never have that image to begin with. Very few accessibility problems are actually solved with alt text. For starters, raster images have a fixed resolution. When users with limited vision (but not so limited as to warrant a screen reader) attempt to zoom in on these, as they are wont to do, that ability is limited. Best case scenario, the image is at print resolution, 300dpi; against a typical 96dpi display, this affords maybe a 300% zoom, and even then there may be artifacting. Another common pitfall is that images (particularly of charts and the like) are often used as a crutch when an author can’t figure out a clean way to present their information. Often this means color is used as the sole means of communicating information (explicitly prohibited by §508), or it means that the information is such a jumble that users with learning disabilities are going to have incredible difficulty navigating it.

Information often wants to fall into a particular structure. When I’m given a bar chart at work, in its original, non-rasterized form, I just structure it back into a table behind the scenes (in PDF). If you’re trying to communicate a message (particularly data), often part of the problem is that there’s a lot of information to communicate. This requires further structuring, and alt text is ‘flat’ – by this I mean it lacks the capability to be structured; it’s generally restricted to paragraph breaks, if that.
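In HTML terms (PDF’s Table, TR, TH, and TD tags mirror the same semantics), that behind-the-scenes restructuring might look something like the sketch below, with the same hypothetical figures as above:

    <table>
      <caption>Q3 sales by product line</caption>
      <tr><th scope="col">Product</th><th scope="col">Sales</th></tr>
      <tr><td>Widgets</td><td>$14,000</td></tr>
      <tr><td>Gadgets</td><td>$9,000</td></tr>
      <tr><td>Gizmos</td><td>$4,000</td></tr>
    </table>

A screen reader user can walk this cell by cell, hear the row and column headers for context, and skip whatever they don’t need – none of which is possible inside an alt attribute.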

An anecdote: in my professional life, I requested that a customer provide original or recreated (but non-rasterized) versions of infographics in a document, and ended up with one that, when pasted into Word, yielded two pages’ worth of text. I explained to the customer that this was far too much content for alt text, for the reasons already mentioned. She responded that her ex-husband was blind, and that how she had written it was exactly how he would have wanted to hear her read it. She failed to understand that if he knew what he was hearing was irrelevant to what he wanted to hear, he could ask her to skip ahead to the relevant bits. She failed to understand that if he missed how a piece of information related to the bigger picture (think header row and column in a table), he could ask her. She failed to understand that she was not a robot, and that he probably enjoyed listening to her talk more than to NVDA.

And it is here that we come to a major pitfall of accessibility work in general. Folks think that it’s enough to provide information, without any consideration for how that information is structured (or not). Pages’ worth of description are not a suitable replacement for an actual table, where data always has context available if necessary and a sense of where it exists in two dimensions. Navigating among sections is a godsend when you’re trying to get through massive amounts of complex data, and fluffy tangential details are simply a waste of time when you’re listening to a robot. This is all on top of the issues that exist for folks struggling with poorly-rendered or poorly-designed images despite not using a screen reader.

Alt text is not a panacea. If it is to be used, it should be concise and clear, while presenting all of the relevant information a sighted user would grasp. If this is not possible, the image should not be rendered as a raster image, period. Listen to your alt text in a screen reader. Try to find a specific data point. If you get lost, find another way to present the information. If you stick with your rasterized image, drop in the highest-resolution version that you possibly can. Print resolution is a minimum; 72dpi is for abled folks. Don’t use color as an exclusive means to communicate or associate data with meaning. Learn to resist images, and when you use them, learn to embed them in inherently machine-friendly ways.
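One such machine-friendly embedding, sketched in HTML against the same hypothetical chart as above: keep the alt text terse, and point assistive tech at real, structured data sitting next to the image.

    <figure>
      <!-- Concise alt; the full figures live in a real table below -->
      <img src="q3-sales.png"
           alt="Bar chart of Q3 sales; full figures in the adjacent table."
           aria-describedby="q3-data">
      <figcaption>Q3 sales by product line</figcaption>
    </figure>
    <table id="q3-data">
      <!-- same structured table as in the earlier sketch -->
    </table>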


The delusion of accessibility checkers

There is a delusion that I deal with, professionally, day in and day out: that nearly any piece of authoring software, be it Microsoft Word or Adobe Acrobat, has some inbuilt mechanism for assessing the accessibility of a document. Before I get into the details, let me just come out and say that if you are not already an accessibility professional, these tools cannot help you. I understand the motivation, but in my experience, these things do more harm than good. They allow unversed consumers to gain a false sense of understanding about the output of their product. That sounds incredibly condescending, but that’s honestly how it should work when you’re talking about fields that require extensive training.

The ultimate problem is that accessibility comes down to human decision. Assistive tech is, thus far, ‘dumb’ – it almost exclusively responds to cues embedded by a human. Some day, I believe that AI will be good enough to largely make the decisions that currently necessitate a human. But we aren’t there yet. A good analogy, but one that only gets through to a select group, is that of a compiler throwing warnings and errors. I can’t just type ‘make me a roguelike, but where all you do is find cute clothes in treasure chests’ into gcc and get my game to come out – the computer is not that smart. I could program dresscrawler myself, and attempt to compile it. gcc might throw errors – things that make it incapable of compiling the program. gcc might throw warnings – things that it thinks could be problematic, or that fall outside of accepted style. But gcc can’t stop me from making dresscrawler a game where the player can spawn inside an inescapable room with no exits. gcc can’t guarantee I even remembered to code dresses into dresscrawler. gcc can’t even ensure that I didn’t create some kind of stack overflow that would cause a fatal exception when I place my jeggings of holding inside my purse of holding.

Perhaps a faultier, but more lay, explanation is that of purchasing a meal. If I go to a local restaurant and order a veggie burger, and it’s awful… I don’t actually know why. I might have a vague idea – that it’s too sweet, say. But why? Did they add too much beet? Did they sweeten it to make up for something else? With what? Sugar? Glucose syrup? Some bizarre synthetic? I don’t have all the information, so I cannot draw accurate conclusions. All I can make are vague suggestions, and this is how accessibility checkers (and compilers) work.

If I knew how to make the perfect veggie burger, I’d just do it myself. If the computer were smart enough, it would program dresscrawler for me (and, oh, how I wish it would). And if the computer were smart enough, none of what I do for a living would even exist! This is a human job because, for the time being at least, it requires human decision-making based on human experience. I stay very engaged with the accessibility community and the disabled community as far as current best practices go. But these are still judgment calls. Some things are explicitly laid out – WCAG 2.0 put forward mathematically acceptable contrast levels, for example. But a lot of accessibility work still boils down to human decision-making, based on ‘how would I want to experience this, if I were experiencing it differently?’
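(For the curious: WCAG 2.0 defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors, and level AA demands at least 4.5:1 for normal text and 3:1 for large text. That’s exactly the sort of thing an algorithm can verify – and about the limit of what one can.)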

Universal digital accessibility is incredibly difficult to do right. But it’s so detached from the reality of most abled people that they trust relatively naïve algorithms to absolve them. I’ve tried to get customers to use JAWS or NVDA just to experience the misery of a poorly-structured document, and… they often don’t even last long enough to care. I honestly wish vendors would stop bundling these half-baked accessibility checkers into their authoring software. Come see me once your approach involves neural networks.


Tagging in Acrobat from the keyboard

Since much of my work revolves around §508 compliance, I spend a lot of time restructuring tags in Acrobat. Unfortunately, you can’t just handwrite these tags à la HTML; you have to physically manipulate a tree structure. The Tags panel is very conducive to mouse use and, because Adobe is Adobe, not very conducive to keyboard use. Many important tasks are missing readily available keyboard shortcuts, and it has taken me a while to be able to largely ditch the mouse and instead use the keyboard to very quickly restructure the tags on very long, very poorly tagged documents.

A couple of notes – this assumes a Windows machine, and one with a Menu key. While I generally prefer working on MacOS, I’m stuck with Windows at work, so these are my efficiencies. Windows may actually have the leg up here, since Acrobat’s keyboard support is so poor and MacOS has no Menu key equivalent. Additionally, this applies to Acrobat XI; it may or may not apply to current DC versions. Finally, all of this information is discoverable, but I haven’t really seen a primer laid out on it. If nothing else, perhaps it will help a future version of myself who forgets all of this.