When I was working on a team with an accessibility requirement, the bulk of the work wasn't color work, nor writing alternative text, but ensuring that the website could be navigated well using a screen reader.
We used JAWS[0] and tested every change and new feature with it to ensure its users could make sense of the website. This was a lengthy process that took a lot of effort and learning on the part of devs to pick up JAWS.
The tough part was that it was a subjective process: no one can tell you whether the final experience is "correct" or not.
This part of web accessibility isn't mentioned as often, but it's just as important. Likely because not many people really know about it.
Edit: I think part of it is the high cost of products like JAWS. The current price of a home license is $1,000. A business license is $1,285.
Most accessibility shops and devs tend to use the free NVDA screen reader[0] for accessibility testing because of the costs associated with JAWS.
There are differences between the two (and the other widely used screen reader, VoiceOver), but NVDA is a damn good product with a decent user base, which certainly helps in finding potential hiccups in complex frontend applications. Usually this happens when one of the screen readers hits something on a webpage that triggers 'interaction mode'[1].
Edit: Interestingly, the most recent data I found[2] seems to indicate that NVDA has passed JAWS as the most used screen reader, so it might be worth using as the default for testing.
The Narrator screen reader built into Windows is also not bad these days, with recent versions of Windows 10. You can turn it on with Ctrl+Win+Enter. And while I'm here, if you're on a Mac, you can turn on VoiceOver with Command+F5. Both Narrator and VoiceOver have built-in tutorials.
Disclosure: I was a developer on the Windows accessibility team at Microsoft for a little over 3 years, focusing on Narrator.
This lines up with my experience working on product features for customers that have accessibility requirements. From what I’ve seen, most larger companies that care about accessibility (usually for fear of lawsuits) have internal folks testing with NVDA specifically. This is another great reason to use it for your testing because you can be confident the odds of someone checking your work with NVDA are good.
1. >>> but ensuring that the website could be navigated well using a screen reader. <<<
That was my experience too
2. >>> The tough part of it was it being a subjective process. No one can tell you if the final experience was "correct" or not.<<<
Also ran into this problem. Our solution was to hire a non-sighted person (who uses a screen reader in their daily life) to test the apps. It made me realize that 'expert' users of screen reader software do not use it the way a sighted user like me would assume it's used.
3. We only use the non-sighted person when we do 'major' overhauls, or once every few years when we do a comprehensive round of accessibility testing. For the accessibility tests we do as part of releasing any new feature (no matter how small), I've had to train myself and get better at doing screen reader testing. I'm still not close to being an expert, but I'm much better at it than I was a few years ago.
In situations where an expert isn't available, do you think visual tools such as Lynx (a text-mode browser) might be more suitable for assessing the screen reader experience than actually using a screen reader?
No. A large part of modern web accessibility work is using ARIA to bind assistive tech APIs to applications. Lynx won’t give you anything unless you’re writing flat HTML sites.
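To make that concrete, here's a minimal sketch (the markup and ids are made up, not taken from any site discussed here) of the kind of binding ARIA does: a disclosure button whose expanded/collapsed state is exposed to assistive tech. A screen reader announces and tracks aria-expanded as it changes; Lynx only ever shows the button text, so it can't tell you whether this works.

    <!-- Hypothetical example: a disclosure button whose state is exposed to
         assistive tech via ARIA. Screen readers announce collapsed/expanded
         as aria-expanded changes; a text-mode browser shows only the label. -->
    <button id="filters-toggle" aria-expanded="false" aria-controls="filters">Show filters</button>
    <div id="filters" hidden>
      <!-- filter controls would go here -->
    </div>
    <script>
      const btn = document.getElementById('filters-toggle');
      const panel = document.getElementById('filters');
      btn.addEventListener('click', () => {
        const open = btn.getAttribute('aria-expanded') === 'true';
        btn.setAttribute('aria-expanded', String(!open)); // the state change is what gets announced
        panel.hidden = open;
      });
    </script>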
I went on Facebook and looked for JAWS support and user groups.
I posted a polite request for volunteers to participate in a user study of foss community building software. A couple of volunteers came forward.
I supplied them with a URL to a standard build and a list of tasks covering the basic feature set -- read topics list, find particular topic, particular user, apply tags, register, post something under account, sign out, look up stats.
Because I had already tested with Netscape, IE, text-mode browsers, no-JS, and keyboard-only setups, and because I'd already conducted several revealing user tests with novice users, the screen reader tests passed with flying colors on the first try, with no reported issues.
That doesn't mean my work is done, and I'll have to keep re-testing as I continue development. However, I think it illustrates well the benefit of trying to support not just 10% browsers, and not just 1% browsers, and not just 0.1% browsers, but every single configuration you find or can imagine.
Of course, if your goal is development speed and such (the most common refrain I hear when presenting this point of view), this may not work for you, and I wish you the best of luck; it's not for everyone.
But if you're writing for longevity and accessibility, this is the route I'd recommend.
Orca is hardly ever used; however, if that's what you have available, trying it is still miles ahead of not doing anything. There are a lot of things you could do to make an OK experience better, but the highest bang for the buck is making a broken experience at least work, and any screen reader will help you discover that some part of your app is completely unreachable with the keyboard when navigating via the screen reader.
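As a rough illustration of the "completely unreachable" case (the markup here is invented): a click handler on a plain <div> is mouse-only, while a native <button> is focusable with Tab, activates with Enter or Space, and is announced as a button by any screen reader, including Orca.

    <!-- Hypothetical example of the most common breakage: -->

    <!-- Broken: not focusable and has no role, so keyboard and screen reader
         users can't reach or activate it -->
    <div class="btn" onclick="save()">Save</div>

    <!-- Works: reachable with Tab, activated with Enter/Space, announced as
         "Save, button" (save() is a placeholder for whatever the app does) -->
    <button type="button" onclick="save()">Save</button>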
In theory, if you make a website work with one screen reader, it should work with all of them. Of course, in practice, just as with browsers, there are corner cases where screen readers differ in their support or interpretation of the standards.
My guess is that if you make the site work with either Orca or NVDA (an open-source Windows screen reader), it will work with the other, and with JAWS. So if I were you I wouldn't bother to set up JAWS and test with it.
This is the case for some accessibility issues, but at the edge cases the differences between screen readers are significantly more apparent than with browsers, and it's also significantly easier to hit edge cases with screen readers. This is almost entirely due to each screen reader having its own idiosyncratic way of handling a section of a website that sends it into 'interaction mode'. WAI-ARIA was designed to help in this regard, but the actual implementations don't seem to be held to the same strict standards that browsers are. This applies to pretty much any complex frontend widget, and it can really only be resolved by using a specific screen reader to determine exactly what it's going to do once it encounters and needs to interact with your complex UI.
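For what it's worth, the widgets that trigger this are usually custom composites built from ARIA roles. A sketch (hypothetical markup, arrow-key handling omitted) of the sort of thing where JAWS, NVDA, and VoiceOver tend to diverge in how they switch into interaction/forms mode and read the roles and states:

    <!-- Hypothetical custom tab widget following the ARIA tabs pattern.
         How each screen reader enters interaction mode here and announces
         role="tab" / aria-selected is exactly where behaviour differs. -->
    <div role="tablist" aria-label="Account settings">
      <button role="tab" id="tab-profile" aria-selected="true" aria-controls="panel-profile">Profile</button>
      <button role="tab" id="tab-billing" aria-selected="false" aria-controls="panel-billing" tabindex="-1">Billing</button>
    </div>
    <div role="tabpanel" id="panel-profile" aria-labelledby="tab-profile">…</div>
    <div role="tabpanel" id="panel-billing" aria-labelledby="tab-billing" hidden>…</div>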
There's also the GPL'ed 'Emacspeak', developed by T V Raman, who is blind. It's been actively developed since 1995 and seems quite advanced? I don't know anything about it; it seems to need Emacs?
I asked an accessibility expert in Australia this question and he said Orca rated very lowly. Better than browser extensions, but nowhere near as good, or as frequently used, as JAWS/NVDA.
I'm sorry to hear that it was a lengthy process for your team. What do you think took more time: learning to use the screen reader, or fixing your site?
I joined the team when accessibility had been an existing requirement for over a year. There was one team member who was very passionate about it and kept things up to a high standard. So thankfully I didn't jump into a mountain of work. It took me a few weeks before I felt I could do things without looking up commands every few minutes. Maybe a few months in I could navigate most of our apps without thinking.
This article isn’t all bad, but it’s not particularly high quality either.
<img src=”baby_elephants.png” alt=”A group of two baby elephants walking behind their mom in a open field.”>
… followed immediately by a demonstration baby elephant image with empty alt text (which in a case like this is worse than no alt attribute, because it tells accessibility tech that the image is purely decorative and should not be announced).
(Note also the use of ” instead of ".)
This is not encouraging.
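For contrast, a quick sketch of how the two cases should be marked up (the file names are made up; the elephant alt text is adapted from the article's own snippet):

    <!-- Purely decorative image: empty alt correctly tells assistive tech to
         skip it entirely -->
    <img src="divider.png" alt="">

    <!-- Content image: needs a real description, as in the snippet quoted above -->
    <img src="baby_elephants.png"
         alt="A group of two baby elephants walking behind their mom in an open field.">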
> You can add a landmark by using the role attribute. Some examples of landmarks are banner, form, and search. Something to note is that you should only place the role attribute on elements like <div> and <span>, do not place on elements that have semantic meaning like a <ul> or <p>.
There is nothing wrong with overriding the role of an element that has semantics; it purely comes down to whether you do it correctly. And the thing they don't mention here is that the converse applies at least as much: if there's a built-in element with your desired semantics, you should use that instead of a role. So their example is a bad one: its <div role="navigation"> should be <nav>.
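A quick sketch of both directions of that rule (markup invented for illustration):

    <!-- Redundant: a role where a built-in element exists -->
    <div role="navigation" aria-label="Main">…</div>

    <!-- Preferred: <nav> already maps to the navigation landmark -->
    <nav aria-label="Main">…</nav>

    <!-- Overriding a semantic element is fine when done correctly, e.g. a list
         repurposed as a tab list (assuming the rest of the tabs pattern, roles
         on the children, keyboard handling, etc. is implemented) -->
    <ul role="tablist">…</ul>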
If you prefer reader mode, like I do, any Medium-based platform is a PITA because all images are blurred. Anyone on a crappy connection will get the same. What a fuxxing shame! Can we stop using this crappy platform, please?
Let's, but really, do you care about the images or are you there for the article content? :) Reader mode is nice for skipping the silly "pardon the intrusion" or "sign up to our newsletter" spam modals, and I don't think I've ever missed any photos or images when reading stuff like that. These days I tend to just avoid content on Medium.
One problem here: line-height should be unitless. Use just a number and it will be treated as a multiple of the element's own font size, e.g. font-size: 2rem; line-height: 1.3. Don't use units.
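A small sketch of the difference (the class name is made up): a unitless line-height is inherited as a factor and recomputed against each element's own font size, while a value with units computes to a fixed length that descendants inherit, which is how large headings end up with cramped or overlapping lines.

    <style>
      body {
        font-size: 1rem;
        /* line-height: 1.3em;   computes to a fixed length (about 20.8px at
                                  the default 16px) that every descendant inherits */
        line-height: 1.3;        /* inherited as a ratio: each element gets
                                    1.3 × its own font size */
      }
      h1.post-title {
        font-size: 2rem;         /* with the unitless value this heading gets
                                    a 2rem × 1.3 line height */
      }
    </style>
    <h1 class="post-title">A longish post title that wraps onto a second line</h1>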
This was a major stumbling block for me until I read about it on MDN. My blog's longish titles were always spilling over into the next line until I removed the em unit. Serves me right for not RTFM-ing and thinking I know enough HTML to cobble together a basic site.
Something not mentioned that a ton of modern websites fail at is having readable text due to either too-low contrast, too-thin or too-small fonts, or all of those things combined. These days it seems like I have to use reader mode for half the articles or blog posts I read daily.
[0] https://www.freedomscientific.com/products/software/jaws/