Day 63: Practice with JAWS

Back to playing with assistive technology. I wanted to mess around with speech input software, but after starting the process, I realized that will be a weekend project due to the learning curve. So I settled on working in JAWS today to continue learning assistive technologies and the experiences they provide.

Note: JAWS is really robust and considered top-notch in the screen reader industry. By no means am I an expert at using JAWS, and I need more practice with it, since I lean more heavily on NVDA for screen reader testing.

Things I accomplished

What I learned today

JAWS stands for “Job Access With Speech”.

The cursor on the screen blinks REALLY fast when JAWS has been activated.

Some of JAWS' basic keyboard commands are very similar to NVDA's (or vice versa). That was extremely helpful when experimenting with it. That made me happy when thinking about one of my blind friends who recently made the switch from JAWS to NVDA. It likely made her transition a whole lot easier! (Now I'll have to ask her about it.)

I used Insert + F3 a lot to move more quickly through a page's regions and interactive areas. I liked how many options I had in its built-in navigation feature. However, I accidentally discovered a browser difference with the Virtual HTML Features dialog when I switched over to my Firefox window to add notes to this post (colors are funny because my system was in High Contrast Mode at the time).

Firefox with Insert + F3:

JAWS Find dialog window.

IE with Insert + F3:

Virtual HTML Features dialog.

The built-in PDF reader in IE didn't seem to register any regions on the Deque cheatsheet, like NVDA with Firefox did, so I couldn't quickly navigate between tables within the browser.

I really like how I could sort links by visited, unvisited, tab order, or alphabetically! Plus, I could either move to a link or activate it as soon as I found it in this list.

JAWS Links List dialog.

JAWS had a few more customization choices than NVDA:

JAWS Settings Center dialog.

My bad: I inadvertently found a few photos on a site I manage that need some alternative text because they are not decorative.


Day 62: Identifying A11y Issues for Voice Input Users

Speech input software is an assistive technology and strategy that people use when they have difficulty using a keyboard or mouse. This may include people with motor, visual, or cognitive disabilities. In the 21st century, it’s an excellent alternative for people in all walks of life.

Things I accomplished

Watched:

Read:

What I learned today

Windows 10 has built-in speech recognition?? It sounds like a combination of Cortana and Speech Recognition could be a cheap alternative to Dragon, but I’d need to experiment a bit with both to compare.

Apple has a Dictation feature. So, somewhat like Windows, a combination of Siri and Dictation could be used. I’ve avoided setting up dictation just because of the privacy flag that pops up when it asks permission to connect to an Apple server and learn from your voice over the Internet. Maybe I’m just paranoid and they all actually work that way?

Dragon offers some ARIA support, but it appears to be limited, and should be tested if relying on aria-label, specific roles, etc.

Love this catchphrase from the Web accessibility perspectives video:

“Web accessibility: essential for some, useful for all.”

Challenges that people who use speech recognition software face on the web:

  • carousels that move without a pause button
  • invisible focus indicators
  • mismatched visual order and tab order
  • mismatched linked image with text and alternative text
  • duplicate link text (e.g. Read More) that leads to different places
  • form controls without labels
  • hover only menus (MouseGrid can struggle accessing these)
  • small click targets
  • clickable items that don’t look clickable
  • too many links

Designers and developers should focus on WCAG's Operable principle. In particular, the Navigable guideline's success criteria apply here. If many of those success criteria are met with other users in mind, it will definitely benefit speech recognition users, too.

In the past, I haven't personally been interested in software like Dragon, yet looking from an accessibility point of view, I'm ready to start testing with speech input technology to better understand how it works and how it affects people who rely on it when interacting with the web.

Day 61: Identifying A11y Issues for Users Who Magnify Their Screen

Things I accomplished

Read:

Watched:

What I learned today

Windows has a built-in magnifier, as does Apple, but it isn't always strong or robust enough to help everyone with low vision. Alternative magnification software includes:

For mobile, I knew Apple phones and tablets had zoom built in, but Android devices have magnification built-in, too.

Apple Watch has a zoom feature (YouTube)!

Trying to learn all things accessibility, I’m constantly having to rediscover keyboard shortcuts:

  • Windows Magnifier: Windows + +
  • Apple Zoom: Option + Cmd + 8


Never assume that two people with low vision are alike. Everyone with low vision has their own underlying reasons for why they struggle. The point is to add flexibility for their particular experience with low vision and the strategies they use to access content and services on the web.

Challenges people who enable magnification may encounter on the web:

  • text rendered as an image becomes blurry and pixelated when magnified
  • unclearly marked sections/landmarks can make navigation slow when a user only sees a small portion of the screen and is trying to differentiate navigation from main content from the footer
  • headings that look too much like paragraph text
  • unclear link text
  • scrolling, flashing, or moving objects (carousels, I’m glaring at you again)
  • drawn out content that doesn’t provide a quick intro or conclusion at the beginning
  • horizontal scrolling
  • page content referred to by its position (e.g. “to the right”)
  • meaning is conveyed by color alone
  • forms with fields and labels that are not close together or positioned on one line together

WebAIM’s advice:

“The general rule when designing for low vision is to make everything configurable. If the text is real text, users can enlarge it, change its color, and change the background color. If the layout is in percentages, the screen can be widened or narrowed to meet the user’s needs. Configurability is the key.”

WCAG supports people with low vision through its Perceivable principle. 15 reasons to consider designing to include low vision users who magnify their screen:

  • 1.1.1 Non-text content (A)
  • 1.3.1 Info and relationships (A)
  • 1.3.3 Sensory characteristics (A)
  • 1.3.4 Orientation (AA)
  • 1.4.1 Use of color (A)
  • 1.4.3 Contrast (minimal) (AA)
  • 1.4.4 Resize text (AA)
  • 1.4.5 Images of text (AA)
  • 1.4.6 Contrast (enhanced) (AAA)
  • 1.4.8 Visual presentation (AAA)
  • 1.4.9 Images of text (no exception) (AAA)
  • 1.4.10 Reflow (AA)
  • 1.4.11 Non-text contrast (AA)
  • 1.4.12 Text spacing (AA)
  • 1.4.13 Content on hover or focus (AA)

Day 60: Identifying A11y Issues for Touch Screen Users

Touch screen accessibility is something I haven't spent much time thinking about. Maybe not ever, come to think of it. I know that touch screens usually require gestures for interaction (i.e. swiping or tapping), but I hadn't thought about how that might affect others. I even know about alternative ways to interact with touch screens, like with switch devices, but I'm not all that familiar with the challenges they can bring.

So, today was a significant learning day to figure out how people could potentially be excluded from phone or tablet design or the apps that live on those devices.

Things I accomplished

Read:

Watched:

What I learned today

The first thing that comes to mind when I hear “touch screen accessibility” is the disadvantages touch screens pose for specific disabilities. However, touch screens are actually quite beneficial for people with other disabilities that make it hard to use a mouse or keyboard.

Things to consider for touch screen accessibility for web design:

  • Sufficiently large touch target sizes, which benefits users with motoric and visual impairments, as well as everyone else [WCAG SC 2.5.5]
  • Simplified layout, generous white space, and intuitive design, which can benefit everyone
  • Allowing different screen orientations (portrait or landscape) to give the user a choice [WCAG SC 1.3.4]
  • Extra considerations for when the screen reader is turned on, since some gestures will change once activated
  • Alternatives for complex gestures must be offered [WCAG SC 2.5.1]
  • Custom gesture events must have an alternative method for activation (e.g. clicking a button) [WCAG SC 2.5.6]
  • Motion-activated events (e.g. shaking the device) must have an alternative method for activation (e.g. clicking a button) [WCAG SC 2.5.4]
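To make the target-size point concrete, here's a minimal CSS sketch of the roughly 44-by-44 CSS pixel targets that WCAG SC 2.5.5 recommends. The class names are hypothetical, not from any particular framework:

```css
/* Illustrative sketch: generous touch targets per WCAG SC 2.5.5,
   which suggests at least 44 by 44 CSS pixels. */
.nav-link,
button {
  display: inline-block; /* lets min-width/min-height apply to links */
  min-width: 44px;
  min-height: 44px;
  padding: 12px 16px;    /* padding grows the hit area beyond the text */
}

/* Spacing between adjacent targets helps prevent mis-taps */
.toolbar > * + * {
  margin-left: 8px;
}
```

Padding (rather than font size alone) is what enlarges the actual hit area, which benefits stylus, gloved-hand, and tremor-affected input alike.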

In short, don't take the choice away from users, and don't make assumptions about how they use their device. Offer them lots of choices (alternatives for interaction) so they can continue to do what they do.

This list reminds me that not all touch screen device owners use the touch screen. Other interaction methods include voice commands, Bluetooth keyboard, and switch devices. Not all who use the touch screen will touch the device the same way I can. Additionally, not all touch screens receive the same type of touch when it comes to how the screen responds to touch (i.e. hand versus gloved hand or stylus).

Day 59: Identifying A11y Issues for High Contrast Mode Users

Ok, ok… so High Contrast Mode (HCM) isn't explicitly listed in the WAS Body of Knowledge under the current section I'm in, but it's an area of testing that is significant to me. I'm interested in seeing how my content looks when it undergoes a transformation created by the system. And I wanted to take time to think about what others using it may experience, and the strategies they may have to use when something can't be seen after that transformation.

Additionally, it's such a cheap and easy way to test that I like to encourage other designers and developers to use it as well. It matters to the people who use your sites and might be using HCM or inverted colors.

One last thing I’d like to mention before sharing what I did and learned… I actually like using HCM on Windows. It has improved greatly over the past few years (I didn’t enjoy it when I first tried it). Oddly enough, a Dark Mode feature has been popping up more and more across applications and systems, so that has provided me with an alternative, too. I don’t use HCM on a regular basis, but I’ve used it for relief before in a bright office with bright windows and three bright monitors glaring around me. I experience light sensitivity, so it provides me with a solution to continue working at my computer without contributing to headaches.

Things I accomplished today

What I learned today

Something I always have to look up when I want to test with HCM is the keyboard shortcut to activate it: Left Shift + Left Alt + Print Screen. The same key combination will exit HCM.

Not all of my Windows custom colors come back to life after using High Contrast Mode. Weird.

Invert colors on macOS, which is a completely different experience to me, can be activated with Control + Option + Command + 8.

HCM offers some personal customization of colors. I played with it some and settled on the High Contrast #1 preset, which offers a black background and yellow text. Then I tweaked hyperlink colors to stand out more in my browser (Firefox).

HCM benefits people with visual and cognitive disabilities, as well as people with environmental impairments. Some examples:

  • low vision
  • Irlen syndrome
  • bright environments like outdoors
  • low-light environments

Not surprisingly, WCAG comes into play here: SC 1.4.8 Visual Presentation. Yes, that’s inside the Perceivable principle!

The last point brought home the issue that we can never assume how someone else’s system is set up. Default HCM offers white text on black background. But that doesn’t work for everyone, dependent upon their visual needs and preferences. The best we can do is follow some core principles to enable people to perceive our content:

  • Give focusable items some sort of visual indicator like a border or highlight (we’re doing it for our keyboard users anyway, right?)
  • Don’t use background images to deliver important content
  • Be considerate of foreground and background colors and how they can change drastically, dependent on the user’s system settings
  • Don’t rely on color alone to convey important information
  • Take advantage of the flexibility of SVGs, currentColor, and buttonText
  • Use high contrast icons, even without considering HCM
  • Add or remove backgrounds that affect HCM users
  • Use semantic markup to improve user experience
  • Always manually test with HCM yourself at the beginning of design and end of development

Firefox partially supports HCM-related code, and Chrome doesn't support it at all. Microsoft supports it, though, with:

@media (-ms-high-contrast: active) {}
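For context, `-ms-high-contrast` is the IE/Edge-specific feature; a standardized `forced-colors` media feature has since emerged, though browser support varies. A hedged sketch combining both, using the currentColor and system color keywords mentioned earlier (the selectors are illustrative):

```css
/* IE/Edge-specific hook shown above */
@media (-ms-high-contrast: active) {
  /* SVG icons drawn with currentColor inherit the user's text color */
  .icon {
    fill: currentColor;
  }
}

/* Standardized successor; support varies by browser */
@media (forced-colors: active) {
  /* System color keywords follow the user's High Contrast palette */
  .card {
    border: 1px solid ButtonText;
  }
}
```

The border trick matters because HCM typically strips background colors: an element that relied on a background to show its boundary can vanish entirely without one.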

For the most part, I was pleasantly surprised that I had no trouble seeing all components on my screen throughout the Windows system, as well as elements on familiar web pages that I frequent. There were a few exceptions, but at least I knew when things were present, even if I couldn't see them. Not great for someone new to those sites, though.

Working in a CKEditor today, I discovered they had a neat trick for people using HCM. The editor icons were no longer icons; they were plain text. Kind of neat! Read further ahead to see more of my experience.

More on CKEditor

As I mentioned under “What I learned today”, my HCM encounter with CKEditor's icon-to-plain-text transformation was a surprise:

CKEditor toolbar with all tool buttons using plain text as labels.

I had to turn off HCM just to remember what I was used to looking at:

CKEditor toolbar with buttons using icons as labels.

Naturally, that got me very curious. So, I visited the CKEditor website and dug into their documentation. Indeed, they have their own support for HCM. Someone put some thought into it! The same transformation did not happen as I wrote this post in WordPress with their TinyMCE editor.

Day 58: Identifying A11y Issues for Keyboard Users

Through studying WCAG (Guideline 2.1) and other web accessibility documentation and articles, I know that keyboard navigability and interoperability is important for a wide variety of users. Some important ideas to focus on when creating websites and keeping keyboard accessibility in mind:

  • Actionable elements (links, buttons, controls, etc.) must be focusable via keyboard (WCAG SC 2.1.1);
  • All focusable elements need a visible border or highlight as a focus indicator (WCAG SC 2.4.7);
  • Logical (expected) tabbing order is set up appropriately (WCAG SC 2.4.3);
  • No keyboard traps, like poorly developed modals, have been created (WCAG SC 2.1.2).
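The focus indicator bullet (SC 2.4.7) can be satisfied with very little CSS. A sketch, with an arbitrary color and thickness chosen purely for illustration:

```css
/* A clearly visible focus indicator for keyboard users (WCAG SC 2.4.7).
   The color and thickness here are arbitrary illustration choices. */
a:focus,
button:focus,
input:focus,
[tabindex]:focus {
  outline: 3px solid #1a73e8;
  outline-offset: 2px; /* keeps the ring from hugging the element */
}

/* The classic mistake: removing the default ring with no replacement.
   Don't ship this on its own. */
/* :focus { outline: none; } */
```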

The best way to test for keyboard accessibility? Test with a keyboard! It’s one of the easiest and cheapest ways to find out if you’re blocking someone from accessing your content. Critical (yet basic) keys to use:

  • Tab
  • Enter
  • Spacebar
  • Arrow keys
  • Esc

If any of those keys fail you when it comes to expected behavior for controls and navigation, it’s time to start digging into the code and figuring out what’s preventing that expected and conventional behavior.

That being said, I’ve started looking at section two of the WAS Body of Knowledge to fill in any gaps I have about identifying accessibility issues, starting with interoperability and compatibility issues. I’ve had a lot of practice checking for keyboard accessibility due to its simplicity, but I leave no stone unturned when it comes to studying for this exam and making sure I’m not missing more gaps in my web accessibility knowledge.

Things I accomplished

What I learned today

Today I didn't learn much on top of what I knew already. However, I'm always eager to take the opportunity to advocate for keyboard testing as a reminder to designers and developers that not everyone uses a mouse, and even well-written code can produce a different experience than initially expected.

One thing I did learn:

  • “A common problem for drop-downs used for navigation is that as soon as you arrow down, it automatically selects the first item in the list and goes to a new page.” (Keyboard Access and Visual Focus by W3C) I'll have to watch out for this one when I audit other sites, since I have not created a drop-down with that type of behavior myself.

Day 56: Practice with Narrator

Back to exploring more assistive tech, specifically screen readers. Today I experimented for the first time with Narrator, the built-in screen reader for Windows. I was a bit apprehensive at first since it has not been on my priority list to learn, knowing that a mere 0.3% of desktop screen reader users actually use Narrator, according to WebAIM's latest Screen Reader Use Survey. However, it's free and built in for Windows users (and it's mentioned in the WAS Body of Knowledge study material), so I'm giving it a chance.

Things I accomplished

What I learned today

  • Turn on Narrator with shortcut keys: Windows (logo) key + Ctrl + Enter.
  • Narrator was finicky with Firefox, my preferred browser, but Edge is recommended as the best web browser when using this screen reader.
  • Narrator has a Developer View (Caps Lock + Shift + F12), which masks the screen, highlighting only the objects and text exposed to Narrator.
  • By default, Narrator presents an outline around the parts of the webpage that it is reading aloud. I found this handy to keep up with where it was at.
  • It has touch gestures. I suppose that makes sense, since not all Windows computers are desktop computers.
  • Accessible documents are important. (I knew this already) I was able to easily navigate between tables on the Deque PDF cheatsheet with the T key because they made it with accessibility in mind.

There is still so much to learn! Jumping between screen reader programs leaves my head spinning with all the shortcut keys I’d need to know. I’ll come back to this screen reader at some point because one hour of use is not enough to get fully comfortable with it. I also need to expand upon my cheatsheet to include more commands/tasks. Currently, it’s just a quick guide to the most frequent tasks I’ve needed.

An Aside: Fun A11y Resource


Day 55: Users with Auditory Disabilities

Auditory disabilities range from different levels of hearing difficulties to deafness, and may even include deaf-blindness. Being inclusive of this group seems fairly straightforward and easy (albeit captioning may require some budgeting).

Things I accomplished

What I learned today

Users who are deaf from birth may have sign language as their first language, and text information on websites can be their second or third language. Icons, illustrations, and images can help enhance the clarity of information provided on a website.

In order to include people with auditory disabilities, web designers and developers need to review the WCAG Perceivable principle. Effective accommodation strategies include:

  • Providing transcripts and captions alongside any content that has audio;
  • Creating media players that can display captions and offer options to adjust text size and color of those captions;
  • Providing options to stop, pause, and adjust volume of audio content within the customized media player;
  • Posting high-quality foreground audio that is clearly distinguishable from background noise; and
  • Writing text in simple, clear language.

Offering sign language video as an alternative can be a nice-to-have (WCAG SC 1.2.6, Level AAA), but it isn’t always the right solution for every person with a hearing impairment. Though deaf culture is a thing, designers should never assume that every deaf person knows sign language. Additionally, it can be hard to clearly see sign language provided via web video.

It is controversial to use the word “disabled” in conjunction with a deaf person. Many within that community don't consider themselves disabled, since they are thinking, capable people.


Day 54: Users with Motoric Disabilities

More on people with various disabilities. Today’s exploration led me to learn more about people with different motor disabilities. This group may include people with cerebral palsy, multiple sclerosis, quadriplegia, and arthritis.

Things I accomplished

What I learned today

When considering people with motor disabilities, web designers and developers should hold fast to WCAG's Operable principle. Important concepts include creating a usable interface that:

  • is keyboard navigable (this also benefits voice activated software)
  • tolerates user error
  • provides alternative navigation methods to skip over lists of links, repetitive sections, and lengthy text
  • sets important stuff above the fold
  • offers autocomplete, autofill, or autosave
  • enables extended time limits
  • manages off-screen items appropriately (display:none, visibility:hidden when out of view)
  • provides clear focus outlines
  • provides large target (clickable) areas (buttons, links, controls)
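The off-screen bullet above is worth unpacking: display:none and visibility:hidden remove content from the accessibility tree as well as from the screen, which is exactly what you want for items that are truly out of view. When text should stay available to screen readers but not be shown visually, a common “visually hidden” pattern is used instead. A sketch (the class name is a convention, not a standard):

```css
/* Truly out-of-view content: hidden from everyone,
   including assistive technology */
[hidden] {
  display: none;
}

/* Visually hidden pattern: invisible on screen, but still
   announced by screen readers */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  border: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
}
```

Mixing these up is a common source of keyboard traps and phantom tab stops: off-screen content that is merely positioned away (rather than display:none) can still receive focus.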

There are one-handed keyboards for people with the use of only one hand. Other assistive technologies that can be used by those with more severe paralysis include head wands, mouth sticks, voice recognition software, switch access, and eye-tracking.


Day 50: Refreshable Braille Displays

Today marks my halfway point in learning. 50 days down (total of 72 hours study time), 50 more to go! So far, I’ve managed to cover swaths of WCAG, ARIA, and ATAG documentation. Additionally, I’ve learned about JavaScript techniques to better support screen readers when it comes to custom widgets. During this time, I’ve also managed to experiment with some of the popular screen readers (VoiceOver, NVDA, and TalkBack).

On that note, I’m curious about braille output. I’m very familiar and comfortable with speech output from screen readers, but am less so with refreshable braille displays. Unfortunately, I don’t currently have access to a refreshable braille display (not that I could read it, even if I did), but that won’t stop me from learning about them online.

Things I accomplished

Watched on YouTube:

Read:

What I learned today

  • Refreshable braille displays come in many shapes and sizes, some with input options, too!
  • Refreshable braille displays can be hooked up wirelessly, like to an iPad, but not all computers/devices support wireless connection.
  • One-line braille displays can greatly limit how information is conveyed to a user; spatial information given in tables and charts can be especially challenging.
  • Android's braille support is provided by BrailleBack.
  • Braille comes in two forms: contracted and uncontracted. Contracted is more advanced and allows for shorthand, of sorts, like abbreviations and contractions.