Day 40: Practice with NVDA

Today I found myself doing a lot of testing with the NVDA screen reader on PDFs at work. So I diverted from my plan to code a custom modal tonight and instead documented what I was learning throughout the day.

I should note that this was not my first encounter with NVDA. It has been my go-to screen reader for testing webpages over the past year. That said, this post does not go in depth on all things NVDA; it simply points out things I hadn’t taken the time to learn during those quick experiments in the past.

Things I accomplished

Review of shortcuts I knew

  • Start reading continuously (from this point on): Insert + Down Arrow
  • Stop reading: Ctrl
  • List all headings, links, and landmarks: Insert + F7
  • Next line: Down Arrow
  • Next character –or– next input option (radio buttons, selection list): Right Arrow
  • Quit NVDA: Insert + Q

What I learned today

  • I’ve only ever used NVDA for listening and forget that Braille input/output is also possible; NVDA supports Braille.
  • NVDA stands for NonVisual Desktop Access.
  • Control + Alt + N only works to open NVDA when a desktop shortcut has been created.
  • If you are running NVDA on a device with a touchscreen (Windows 8 or higher), you can use NVDA with touch commands.
  • Pause speech with the Shift key, as opposed to stopping speech with the Control key.
  • NVDA can navigate you to the next blockquote with the Q key.
  • NVDA has commands to read an entire open dialog (Insert + B) or just the title of the dialog (Insert + T).

On top of all these new shortcuts and tidbits, I was reminded that I am not a screen reader user. When I was trying to solve a “problem” within a remediated PDF document, I finally concluded that it wasn’t the document’s problem, but rather the way I was using NVDA as a novice screen reader user. After listening to the PDF with JAWS, which gave me the results I expected, I decided to abandon the issue I thought the document had. In doing so, I’m trusting that, since I am not that user, the appropriate tags applied to the document will let real screen reader users make their own decisions while still accessing all the content within it.

Day 39: Concepts Concerning Custom Modals

Today I started combing through code and underlying principles to create accessible dialogs (modals). The development of this pattern has eluded me in the past, so I wanted to tackle it first. I didn’t get as far as I’d have liked (no coding on my part), but I did squeeze in the time to review what others have done and why they did it.

Things I accomplished

What I learned today

  • I knew there was a dialog HTML element, but didn’t realize just how unevenly supported it is across browsers. I see now why so many devs just use a div with ARIA role="dialog" instead.
  • Once a dialog is opened, focus should immediately move inside the dialog, and its dialog role and accessible name (from aria-label or aria-labelledby; aria-describedby can supply an additional description) should be announced.
  • When a dialog is open, the Tab key should not allow the user to get outside of the dialog box.
  • A user should be able to close the dialog with Esc, tapping outside the box, pressing a Close button, or even F6 (reaching the address bar).
  • When a dialog is closed, focus should return back to the element that initiated it.
  • Dialogs should be hidden with visibility:hidden.
  • Expected keyboard interactions within the dialog should be:
    • Tab: Moves focus forward inside the dialog
    • Shift + Tab: Moves focus backward inside the dialog
    • Escape: Closes the dialog
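
A minimal sketch tying these behaviors together (plain JavaScript; the #my-dialog container, its .close-button, and the trigger button passed to openDialog() are hypothetical markup, not code from any particular library):

    // Assumed markup: <div id="my-dialog" role="dialog" aria-modal="true"
    //   aria-labelledby="my-dialog-title" style="visibility: hidden"> ... </div>
    const dialog = document.getElementById('my-dialog');
    let opener = null; // the element that launched the dialog

    function openDialog(triggerButton) {
      opener = triggerButton;
      dialog.style.visibility = 'visible';           // hidden/shown via visibility, per the note above
      dialog.querySelector('.close-button').focus(); // move focus inside immediately
      document.addEventListener('keydown', onKeydown);
    }

    function closeDialog() {
      dialog.style.visibility = 'hidden';
      document.removeEventListener('keydown', onKeydown);
      if (opener) opener.focus();                    // return focus to the initiating element
    }

    function onKeydown(event) {
      if (event.key === 'Escape') {                  // Esc closes the dialog
        closeDialog();
        return;
      }
      if (event.key !== 'Tab') return;
      // Keep Tab and Shift + Tab cycling inside the dialog
      const focusables = dialog.querySelectorAll(
        'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])'
      );
      const first = focusables[0];
      const last = focusables[focusables.length - 1];
      if (event.shiftKey && document.activeElement === first) {
        event.preventDefault();
        last.focus();
      } else if (!event.shiftKey && document.activeElement === last) {
        event.preventDefault();
        first.focus();
      }
    }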

With all the effort that goes into a custom dialog, and its many shortcomings with only partially reliable workarounds, it makes me wonder whether dialogs are really the answer to some “problems”. Looking at the usability aspect for all users, I wonder whether dialogs really meet a user’s need or, rather, a designer’s and a business’s need.

Other articles I read today

Day 37: How Well Do Browsers Play with Assistive Technologies?

This week I’m moving into the WAS Body of Knowledge section “choose accessibility techniques that are well-supported”. Most of these topics I’ve had some experience with and even preached about myself. For instance, adhering to coding standards and building with progressive enhancement in mind are two concepts I firmly believe can eliminate a lot of problems. I also understand that testing across platforms, browsers, and assistive technologies is important in order to discover what unanticipated barriers might occur, despite coding to standard.

That being said, today I focused on learning about what combinations of browsers and assistive technologies have been tested to work the best together. I know a bit about screen reader and browser combinations, but I’m certain there is more to learn than the base knowledge I have.

Things I accomplished

What I learned today

Here’s what I learned:

  • VoiceOver (VO) on macOS works mostly well with Firefox, but VO used with Chrome has limited support. Naturally, VO works best with Safari.
  • On that note, TalkBack is the only screen reader that works best with Chrome. Other screen readers have limited support or some support with exceptions. Oddly enough, even ChromeVox has some exceptions.
  • Edge does not support Dragon or ZoomText, and yet Internet Explorer (IE) does. As a matter of fact, IE is recommended for use with these two technologies.
  • Edge has the most support (with exceptions) for Narrator.
  • JAWS has long been recommended for use with IE, but as of recently Firefox is a close second.
  • NVDA still plays best with Firefox.
  • Firefox and IE differ in visual focus, so both should be tested for this.
  • Likewise, video and audio elements differ across browsers, so those should be tested across browsers, too.
  • IE and Firefox are the only browsers that support Flash and Java accessibility.
  • ChromeVox uses the DOM to access content for the listener, unlike other screen readers that use an accessibility API or a combination of an API and the DOM.
  • Level Access has a wiki on iOS Accessibility Issues.
  • SAToGo is another screen reader that works on Windows.

One of my favorite resources for checking support across browsers is caniuse.com. Choice of elements can really matter in cases where IE doesn’t support all HTML5 elements, including dialog. This resource alone has taught me so much about browser support for standards as I’ve worked through projects. In this vein, HTML5 Accessibility is another useful site.

One thing to remember is that following standards (like WCAG) is your best bet. Aiming for specific AT or browser support is not a good approach, since both get updated and support between the two can change.

Note: the Level Access wiki about AT support by browser covers the most popular browsers; others, like Opera, were not mentioned.

Day 35: A11y Verification Testing as Part of User Testing

Today’s mission was to reflect on user (usability) testing, and search for accessibility verification testing (AVT). AVT was new to me, and I had a harder time finding that exact word combination when searching the web. So, I ended up reading about accessibility user testing and manual testing a developer can perform on her own to emulate user testing.

Things I accomplished

What I learned today

  • Accessibility testing is a subset of usability testing.
  • User testing for accessibility requires recruiting real users with disabilities. The rest is much like general user testing:
    • observing them in an environment familiar to them,
    • assigning tasks to accomplish,
    • observing unspoken actions,
    • scrutinizing results, and
    • concluding what changes need to happen.

Cool resource

Designing for Guidance (Microsoft) [PDF] offers tips about the varied learning styles people have. With the approach of inclusive design in mind, this tiny booklet will make you think more critically when you develop a product or learning course for a broad audience.

Day 34: TalkBack on Android

Today was a diversion from my quality assurance research this week, out of necessity: I needed to test with an Android screen reader at work. Time spent today: 2 hours.

Things I accomplished

What I learned today

  • TalkBack wasn’t too different from my experience with VoiceOver gestures, aside from some minor gesture differences.
  • The equivalent functionality to VoiceOver’s rotor is the Local Context Menu.
  • TalkBack’s quick-access shortcut can be set to a triple-click of my S5’s Home button to turn it on.
  • TalkBack keyboard events are not the same as touch events. It can be hard to develop for all TalkBack users (some keyboard users, some touch users).
  • 29.5% of respondents to WebAIM’s screen reader survey said they use TalkBack.
  • A two-finger or three-finger swipe navigates me through my multiple screens.
  • The “explore by touch” feature reads focusable items as I drag my finger around the screen.
  • My PIN was easier to enter with TalkBack than it was with VoiceOver.

Day 33: A11y, UX, or Both?

Things I accomplished

What I learned today

Comparing accessibility and user experience, both have benefits for all, yet differ mostly by audience:

  • accessibility
    • audience: people with disabilities
    • intent: the targeted audience can perceive, understand, navigate, and interact with websites and tools (an equivalent user experience)
  • user experience
    • audience: anyone
    • intent: a product should be effective, efficient, and satisfying

Accessibility has a more technical aspect (it must take assistive technologies into account, for instance); UX is more principle-driven in its approach.

Usable accessibility = a11y + UX.

Accessibility is just one aspect of the “universal web”.

Looking at accessibility a little closer, what makes a person disabled? We may think of someone with a disability as having a doctor’s certified diagnosis or an obvious physical or mental difference from ourselves. Yet disabilities are better defined as a mismatch between a person’s abilities and their environment. It puts us all on a spectrum, doesn’t it?

An accurate statement

“For people without disabilities, technology makes things convenient. For people with disabilities, it makes things possible.” – Judith Heumann, U.S. Department of Education’s Assistant Secretary of the Office of Special Education and Rehabilitative Services

Factoid resource

  • Section 255 of the Telecommunications Act of 1996: Fueling the Creation of New Electronic Curbcuts
    A timeline of IT innovations that were built for people with disabilities but made their way into mainstream tech use. To my surprise: the typewriter!

Day 32: Benefits of Designing for A11y

Things I accomplished

What I learned today

There are several benefits to starting out a design process with accessibility in mind, rather than catching it after production and resorting to remediation. Some of these benefits include:

  • a solid customer base due to an off-the-shelf accessible product,
  • saved money by building it right rather than redesigning over and over again,
  • minimized legal risk,
  • innovation and the challenge to solve real-world problems,
  • improved productivity,
  • improved diversity, and
  • improved corporate image and brand when accessible technologies and strategies are incorporated within the organization.

Day 29: Making Dynamic Content Perceivable

WCAG 2.1 Success Criteria (SC) 4.1.3 and 1.3.2 are just two good reasons that we, as designers and developers, should be mindful of how and where we add new content while a user is interacting with our website. Today I spent an hour seeing how deep I could dig into the concept of making dynamic content on a page perceivable to people who use assistive technology. I didn’t get as far or learn as much as I’d hoped, but I have included in this post a few of the resources I found helpful during my search.

As an aside, one fun thing about this journey has been revisiting familiar websites and running across familiar names in the web accessibility circle.

Thing I accomplished

  • Searched for articles and videos about making dynamic content perceivable, as well as managing DOM order; the two seem to go hand in hand.

What I learned today

  • When adding or updating content, be sure it’s inserted after the user’s current point of focus. That makes sense, as many users (not just screen reader users) will likely not go backward in the flow of content.
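
A quick sketch of that principle (the #results list and #results-status live region here are hypothetical markup, not from a specific site):

    // Assumed markup, in DOM order: the filter controls the user operates, then
    // <ul id="results"></ul>, then <div id="results-status" role="status" aria-live="polite"></div>
    const results = document.getElementById('results');
    const status = document.getElementById('results-status');

    function showResults(items) {
      // New content lands after the controls the user just interacted with,
      // so moving forward through the page reaches it naturally.
      results.innerHTML = '';
      for (const item of items) {
        const li = document.createElement('li');
        li.textContent = item;
        results.appendChild(li);
      }
      // The live region announces the update without moving focus
      // (in the spirit of SC 4.1.3, Status Messages).
      status.textContent = items.length + ' results found';
    }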

Resources

Day 28: Accessible JavaScript Events

Today I sought to learn about JavaScript events. Specifically, in the context of accessibility, I wanted to dig deeper into two ideas that can make or break the interaction of assistive technologies with websites and web apps:

  1. there should be no more than one event assigned to an element (some exceptions may apply)
  2. create device-independent event handlers

Thing I accomplished

  • Searched for articles and videos pertaining to proper use of event handlers to optimize accessibility.

What I learned today

As developers, we shouldn’t offer interaction with just one type of device or peripheral. Coding device-independent event handlers will open up the experience to a wider audience of users. We know to make our sites keyboard accessible, but we shouldn’t build just for keyboard users either. Examples of device-independent event handlers:

  • onFocus
  • onBlur
  • onSelect
  • onChange
  • onClick (when used with links or form elements)

The examples mentioned above aren’t foolproof, though: altering their default behaviors can present problems.
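
To make the contrast concrete, here’s a small sketch (the button, panel, link, and hint IDs are hypothetical) that favors device-independent events and pairs a mouse-only event with a keyboard equivalent:

    // Assumed markup: <button id="expand-button" aria-expanded="false">,
    // <div id="details-panel" hidden>, <a id="help-link" href="#help">, <span id="help-hint" hidden>
    const button = document.getElementById('expand-button');
    const panel = document.getElementById('details-panel');
    const link = document.getElementById('help-link');
    const hint = document.getElementById('help-hint');

    // Device-independent: 'click' on a real <button> fires for mouse,
    // keyboard (Enter/Space), and touch alike.
    button.addEventListener('click', () => {
      const expanded = button.getAttribute('aria-expanded') === 'true';
      button.setAttribute('aria-expanded', String(!expanded));
      panel.hidden = expanded;
    });

    // Device-dependent events paired with keyboard equivalents,
    // so focusing the link behaves the same as hovering it.
    const showHint = () => { hint.hidden = false; };
    const hideHint = () => { hint.hidden = true; };
    link.addEventListener('mouseover', showHint);
    link.addEventListener('focus', showHint);
    link.addEventListener('mouseout', hideHint);
    link.addEventListener('blur', hideHint);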

While searching for articles, I was amazed to find writing on accessible JavaScript going back 10 years (or more!). My time in this field is still so fresh and new that I forget how long these conversations have been going on. A huge thank you to all who have initiated these conversations and built an education for the rest of us!

Resources

Day 23: VoiceOver for macOS

Needing to take a break from reading through so much documentation, I decided to spend some time with some assistive technology. Specifically, I practiced navigating with VoiceOver (VO) on my MacBook Pro. It turned out to be more of a challenge than I anticipated!

Things I accomplished

What I learned today

  • I surprised myself by feeling more out of my depth on my MBP than I did with my iPhone when using VoiceOver. I’ve become fairly familiar with NVDA on Windows, so I really felt like I was having to relearn navigating with a screen reader.
  • Control + Option + U opens the rotor.
  • Sometimes using VO felt complex, having to hold down four keys just to “quickly” navigate a webpage.

VoiceOver on macOS resources