Day 50: Refreshable Braille Displays

Today marks my halfway point in learning. 50 days down (total of 72 hours study time), 50 more to go! So far, I’ve managed to cover swaths of WCAG, ARIA, and ATAG documentation. Additionally, I’ve learned about JavaScript techniques to better support screen readers when it comes to custom widgets. During this time, I’ve also managed to experiment with some of the popular screen readers (VoiceOver, NVDA, and TalkBack).
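
As a quick illustration of the kind of JavaScript technique I mean, here's a minimal sketch (my own example with a hypothetical element id, not one from the documentation) of making a custom toggle control operable with a screen reader. A native button provides all of this for free, but a styled div needs the role, focusability, state, and keyboard handling added by hand:

```js
// Minimal sketch: wiring up a custom toggle so screen readers can
// perceive and operate it. The #mute-toggle element is hypothetical.
const toggle = document.querySelector('#mute-toggle');

toggle.setAttribute('role', 'button');        // announce it as a button
toggle.setAttribute('tabindex', '0');         // make it keyboard focusable
toggle.setAttribute('aria-pressed', 'false'); // expose its toggle state

function activate() {
  const pressed = toggle.getAttribute('aria-pressed') === 'true';
  toggle.setAttribute('aria-pressed', String(!pressed));
}

toggle.addEventListener('click', activate);
toggle.addEventListener('keydown', (event) => {
  // Native buttons respond to both Enter and Space, so mimic that.
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    activate();
  }
});
```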

On that note, I’m curious about braille output. I’m very familiar and comfortable with speech output from screen readers, but am less so with refreshable braille displays. Unfortunately, I don’t currently have access to a refreshable braille display (not that I could read it, even if I did), but that won’t stop me from learning about them online.

Things I accomplished

Watched on YouTube:

Read:

What I learned today

  • Refreshable braille displays come in many shapes and sizes, some with input options, too!
  • Refreshable braille displays can be connected wirelessly (to an iPad, for example), but not all computers and devices support a wireless connection.
  • One-line braille displays can greatly limit how information is conveyed to a user; spatial information given in tables and charts can be especially challenging (see the markup sketch after this list).
  • On Android, braille support is provided by BrailleBack.
  • Braille comes in two forms: contracted and uncontracted. Contracted braille is more advanced and allows for a kind of shorthand, with abbreviations and contractions.
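
Since spatial context is exactly what a one-line display loses, table markup that exposes header/cell relationships matters even more. Here's a minimal sketch (my own example; the table id and caption text are hypothetical) of enforcing those relationships with JavaScript, though ideally the HTML would ship this way to begin with:

```js
// Minimal sketch: make sure a data table exposes the header/cell
// relationships that screen readers and braille displays rely on.
const table = document.querySelector('#sales-table'); // hypothetical id

// Column headers get scope="col" and row headers get scope="row", so
// AT can pair each data cell with its headers.
table.querySelectorAll('thead th').forEach((th) => th.setAttribute('scope', 'col'));
table.querySelectorAll('tbody th').forEach((th) => th.setAttribute('scope', 'row'));

// A <caption> gives the table an accessible name, announced on entry.
if (!table.caption) {
  table.createCaption().textContent = 'Quarterly sales by region'; // hypothetical
}
```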

Day 40: Practice with NVDA

Today I found myself doing a lot of testing with the NVDA screen reader on PDFs at work. So I set aside my plan to code a custom modal tonight and instead documented what I learned throughout the day.

I should note that this was not my first encounter with NVDA. It has been my go-to screen reader for testing webpages during the past year. That said, this post doesn’t go in depth about all things NVDA; rather, it points out things I hadn’t taken the time to learn during the quick experiments I’ve done in the past.

Things I accomplished

Review of shortcuts I knew

| Task | Shortcut |
| --- | --- |
| Start reading continuously (from this point on) | Insert + Down Arrow |
| Stop reading | Ctrl |
| List all headings, links, and landmarks | Insert + F7 |
| Next line | Down Arrow |
| Next character, or next input option (radio buttons, selection list) | Right Arrow |
| Quit NVDA | Insert + Q |

What I learned today

  • I’ve only used NVDA for listening and tend to forget that braille input/output is possible; NVDA supports braille displays.
  • NVDA stands for NonVisual Desktop Access.
  • Control + Alt + N only works to open NVDA when a desktop shortcut has been created.
  • If you are running NVDA on a device with a touchscreen (Windows 8 or higher), you can use NVDA with touch commands.
  • Pause speech with the Shift key, as opposed to stopping speech with the Control key.
  • NVDA can navigate you to the next blockquote with the Q key.
  • NVDA has commands to read an entire open dialog (Insert + B) or just the title of the dialog (Insert + T).

On top of all these new shortcuts and tidbits, I was reminded that I am not a screen reader user. When I was trying to solve a “problem” within a remediated PDF document, I finally concluded that it wasn’t the document’s problem, but rather the way I was using NVDA as a novice screen reader user. After listening to the PDF with JAWS, which gave me the results I expected, I decided to abandon the issue I thought the document had. In doing so, I’m trusting that, even though I am not that user, the appropriate tags given to the document will allow real screen reader users to make their own decisions while still being able to access all the content within it.

Day 37: How Well Do Browsers Play with Assistive Technologies?

This week I’m moving into the WAS Body of Knowledge section “choose accessibility techniques that are well-supported”. I’ve had some experience with most of these topics and have even preached about some of them myself. For instance, adhering to coding standards and building with progressive enhancement in mind are two concepts I firmly believe can eliminate a lot of problems. I also understand that testing across platforms, browsers, and assistive technologies is important in order to discover what unanticipated barriers might occur, despite coding to standards.
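
To make the progressive enhancement point concrete, here's a minimal sketch (my own example, with hypothetical element ids): the page works as a plain form submission, and JavaScript only layers on an in-place enhancement when the API it needs is available:

```js
// Minimal sketch of progressive enhancement: the form works without
// JavaScript; fetch-based in-place results are layered on only when
// the browser supports fetch. Element ids are hypothetical.
const form = document.querySelector('#search-form');

if (form && 'fetch' in window) {
  form.addEventListener('submit', (event) => {
    event.preventDefault();
    // Assumes text inputs only; FormData with files won't serialize here.
    const params = new URLSearchParams(new FormData(form));
    fetch(`${form.action}?${params}`)
      .then((response) => response.text())
      .then((html) => {
        document.querySelector('#results').innerHTML = html;
      });
  });
}
// Without JavaScript (or without fetch), the form still submits normally.
```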

That being said, today I focused on learning about what combinations of browsers and assistive technologies have been tested to work the best together. I know a bit about screen reader and browser combinations, but I’m certain there is more to learn than the base knowledge I have.

Things I accomplished

What I learned today

Here’s what I learned:

  • VoiceOver (VO) on macOS works mostly well with Firefox, but VO used with Chrome has limited support. Naturally, VO works best with Safari.
  • On that note, TalkBack is the only screen reader that works best with Chrome. Other screen readers have limited support or some support with exceptions. Oddly enough, even ChromeVox has some exceptions.
  • Edge does not support Dragon or ZoomText, and yet Internet Explorer (IE) does. As a matter of fact, IE is recommended for use with these two technologies.
  • Edge has the most support (with exceptions) for Narrator.
  • JAWS has long been recommended for use with IE, but Firefox has recently become a close second.
  • NVDA still plays best with Firefox.
  • Firefox and IE differ in visual focus, so both should be tested for this.
  • Likewise, video and audio elements differ across browsers, so those should be tested across browsers, too.
  • IE and Firefox are the only browsers that support Flash and Java accessibility.
  • ChromeVox uses the DOM to access content for the listener, unlike other screen readers, which use an accessibility API or a combination of an API and the DOM.
  • Level Access has a wiki on iOS Accessibility Issues.
  • SAToGo is another screen reader that works on Windows.

One of my favorite resources for checking support across browsers is caniuse.com. Choice of elements can really matter in cases where IE doesn’t support all HTML5 elements, including dialog. This resource alone has taught me so much about browser support for standards as I’ve worked through projects. In this vein, HTML5 Accessibility is another useful site.
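
For example, here's a minimal sketch (my own illustration) of guarding against missing dialog support rather than assuming it, in the spirit of checking caniuse.com first:

```js
// Minimal sketch: feature-detect the native <dialog> element before
// relying on it, since browsers like IE never implemented it.
const dialog = document.querySelector('dialog');

if (dialog && typeof dialog.showModal === 'function') {
  dialog.showModal(); // native modal behavior, including focus handling
} else if (dialog) {
  // Fallback: reveal the element by hand. A real fallback would also
  // trap focus and handle the Escape key.
  dialog.setAttribute('open', '');
  const first = dialog.querySelector('button, [href], input');
  if (first) first.focus();
}
```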

One thing to remember is that following standards (like WCAG) is your best bet. Aiming for specific AT or browser support is not a good approach, since both receive updates and the support between them can change.

Note: the Level Access wiki about AT support by browsers covers only the most popular browsers. Others, like Opera, were not mentioned.

Day 34: TalkBack on Android

Today was a diversion from my quality assurance research this week, out of the necessity of testing with an Android screen reader at work. Time spent today: 2 hours.

Things I accomplished

What I learned today

  • TalkBack wasn’t too different from my experience with VoiceOver; the gestures have only minor differences.
  • The equivalent functionality to VoiceOver’s rotor is the Local Context Menu.
  • TalkBack quick access can be configured so that a triple-click of my S5’s Home button turns it on.
  • TalkBack keyboard events are not the same as touch events, which can make it hard to develop for all TalkBack users (some use a keyboard, some use touch); see the sketch after this list.
  • 29.5% of respondents to WebAIM’s screen reader survey said they use TalkBack.
  • A two-finger or three-finger swipe navigates me through my multiple screens.
  • The “explore by touch” feature reads focusable items as I drag my finger around the screen.
  • My PIN was easier to enter with TalkBack than it was with VoiceOver.
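
On that keyboard-versus-touch point, a common workaround (my general understanding, not something from today's testing) is to hang behavior on the device-independent click event, which TalkBack fires for both a double-tap gesture and keyboard activation. A minimal sketch, with a hypothetical element id:

```js
// Minimal sketch: one click handler covers mouse, touch, keyboard,
// and TalkBack's double-tap, unlike touchend or keydown alone.
const action = document.querySelector('#submit'); // hypothetical id

action.addEventListener('click', () => {
  console.log('Activated, regardless of input method');
});
```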

Day 24: Better VoiceOver Practice

Today I came back to practice using VoiceOver (VO) a bit more, since I was struggling with it yesterday. More practice definitely gave me more confidence. It would take a week of consistent use for me to use VO more naturally with my laptop. That’s an aspiration for the near future.

Things I accomplished

  • Walked through all 22 steps of the built-in VoiceOver Quick Start tutorial.
  • Read Chapters 1, 2, and 6 of Apple’s VoiceOver Getting Started Guide.
  • Added keyboard shortcuts to my study spreadsheet.

What I learned today

  • VO has a Trackpad Commander option. This means I can use some of the same gestures on my MacBook Pro (MBP) trackpad that I use on my iPhone! This was an important discovery for me, offering cross-device ease of use.
  • Control + Option + Spacebar selects my choice for interactive components like checkboxes, radio buttons, buttons, etc.
  • I finally got the hang of stepping in and out of different components and windows by using Control + Option + Shift + up/down arrows. For some reason, my brain struggled with this yesterday.
  • Control + Option + D gets me quickly into the dock of my MBP.
  • Control + Option + M goes directly to my MBP menu.
  • Control + Option + K opens keyboard help. While it’s open, pressing any key while holding down Control + Option explains what that key does.
  • Control + Option + H + H (pressing H twice) opens a Command help dialog, which lists all the different keyboard shortcuts for specific commands and tasks.
  • Web Spots is a generated list of areas of the current webpage based on VoiceOver’s interpretation of the page’s visual design.
  • Control + Option + ; (semi-colon key) locks the VO modifier keys so you don’t have to keep holding them for shortcut commands. This was a big deal to learn! It seemed ridiculous to keep holding down 2-4 keys at a time while pressing another key.
  • Control + Option + Shift + I creates a verbal overview of the page, including how many headings, links, landmarks, etc.

Day 23: VoiceOver for macOS

Needing to take a break from reading through so much documentation, I decided to spend some time with some assistive technology. Specifically, I practiced navigating with VoiceOver (VO) on my MacBook Pro. It turns out that it was more of a challenge than I anticipated!

Things I accomplished

What I learned today

  • I surprised myself by feeling more out of my element on my MBP than I did with my iPhone when using VoiceOver. I’ve become fairly familiar with NVDA on Windows, so I really felt like I was having to relearn navigating with a screen reader.
  • Control + Option + U opens the rotor.
  • Sometimes using VO felt complex when having to hold down 4 keys to “quickly” navigate a webpage.

VoiceOver on macOS resources

Day 7: Learning VoiceOver on iOS

While I’m learning about how some users perceive and operate pages, I think it’s a good time to start diving into some of the assistive technology (AT) that people use. A week ago I had a more cut-and-dried approach to the WAS Body of Knowledge, as seen on my Google calendar, but now I’m discovering a more natural overlap of ideas as my own curiosity grows and I attempt to apply these ideas in a useful and concrete way.

To start, I spent time with Apple’s VoiceOver (VO) on my iPhone. Why this particular AT first? Honestly, I needed it during work today to test and share with others. Secondly, it’s the one I have the least familiarity with. It’s hard to go against how I normally use my iPhone! Yet enough motivation and time have slowly brought me around and made me think more about how this AT can benefit everyone in the screenless world we are presumed to be heading into.

Things I accomplished

  • Successfully navigated my iPhone using VoiceOver without looking at the screen or giving up too easily out of frustration.
  • As part of my bi-weekly duties at work, wrote a short accessibility blurb meant to introduce librarians, archivists, and museum staff to VO on iPhone (not yet published at the time I’m writing this).

What I learned today

  • VO has a screen curtain mode that turns the screen off while the functionality continues. This offers added privacy and could save battery power. Careful! I scared myself when I couldn’t toggle my screen back on. Fortunately, I’d practiced navigating beforehand and listened my way through Settings to turn VO off, which resolved the issue. Phew!
  • Practiced at least 5 ways to interact with my iPhone using VO:
    | Gesture | Desired behavior |
    | --- | --- |
    | Swipe down with 2 fingers | Start reading continuously (from this point on) |
    | Tap with 2 fingers | Stop reading |
    | Rotate both thumbs in sync (like a dial) | Scroll through the rotor for settings and navigation options |
    | Tap with 1 finger | Select item |
    | Double-tap with 1 finger | Choose item or activate button |
  • I can comfortably listen to VO at a speed of 60%.

Food for thought

  • It seems to me that visually impaired people have to sacrifice a bit more of their privacy just to accomplish what people who don’t use screen readers can accomplish on a daily basis (everything is read aloud).
  • Imagine if all of us practiced using VO. There could be benefits to not looking at your screen for every task, giving verbal commands for tasks, and gaining empathy and insight into how others interact differently with the same world.
  • I actually appreciated listening to, as opposed to looking at, some things. For instance:
    • small icons are often problematic for me to see, but VO suddenly brought clear definition to emojis and indicators that I may have only guessed at before,
    • I could listen to my email as I walked on my break, and still keep alert for what was around me, and
    • I could perform some tasks on my phone, without the screen lighting up the whole room, while my son drifted off to sleep tonight.

How you can learn to use VoiceOver on iOS