Day 62: Identifying A11y Issues for Voice Input Users

Speech input software is an assistive technology and strategy that people use when they have difficulty using a keyboard or mouse. This may include people with motor, visual, or cognitive disabilities. In the 21st century, it’s also an excellent alternative for people from all walks of life.

Things I accomplished



What I learned today

Windows 10 has built-in speech recognition?? It sounds like a combination of Cortana and Speech Recognition could be a cheap alternative to Dragon, but I’d need to experiment a bit with both to compare.

Apple has a Dictation feature. So, somewhat like Windows, a combination of Siri and Dictation could be used. I’ve avoided setting up dictation just because of the privacy flag that pops up when it asks permission to connect to an Apple server and learn from your voice over the Internet. Maybe I’m just paranoid and they all actually work that way?

Dragon offers some ARIA support, but it appears to be limited, so sites that rely on aria-label, specific roles, etc. should be tested with it.

Love this catchphrase from the Web accessibility perspectives video:

“Web accessibility: essential for some, useful for all.”

Challenges that people who use speech recognition software face on the web:

  • carousels that move without a pause button
  • invisible focus indicators
  • mismatched visual order and tab order
  • linked images whose visible text and alternative text don’t match
  • duplicate link text (e.g. Read More) that leads to different places
  • form controls without labels
  • hover-only menus (MouseGrid can struggle to access these)
  • small click targets
  • clickable items that don’t look clickable
  • too many links
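Two of these, unlabeled form controls and duplicate link text, are easy to make concrete. Here’s a hypothetical sketch (the names and URLs are my own invention) of markup that works better for voice users, who speak the visible text to activate a control:

```html
<!-- Voice users say the visible label, so give controls real labels
     and keep the accessible name matching the visible text -->
<label for="email">Email address</label>
<input id="email" name="email" type="email">

<!-- Distinct link text instead of repeated "Read More" -->
<a href="/articles/voice-input">Read more about voice input</a>
<a href="/articles/magnification">Read more about magnification</a>
```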

Designers and developers should focus on WCAG’s Operable principle. In particular, the Navigable guideline’s success criteria apply here. If many of those success criteria are met with other users in mind, speech recognition users will definitely benefit, too.

In the past, I haven’t personally been interested in software like Dragon, yet looking from an accessibility point of view, I’m ready to start testing with speech input technology to better understand how it works and how it affects people who rely on it when interacting with the web.

Day 61: Identifying A11y Issues for Users Who Magnify Their Screen

Things I accomplished



What I learned today

Windows has a built-in magnifier, as does Apple, but it isn’t always strong or robust enough to help everyone with low vision. Alternative magnification software includes:

For mobile, I knew Apple phones and tablets had zoom built in, but Android devices have magnification built-in, too.

Apple Watch has a zoom feature (YouTube)!

Trying to learn all things accessibility, I’m constantly having to rediscover keyboard shortcuts:

  • Windows Magnifier: Windows + +
  • Apple Zoom: Option + Cmd + 8


Never assume that two people with low vision are alike. Everyone with low vision has their own underlying reasons for the difficulties they experience. The point is to add flexibility for their particular experience with low vision and the strategies they use to access content and services on the web.

Challenges people who enable magnification may encounter on the web:

  • text as images becomes blurry and pixelated when magnified
  • unclearly marked sections/landmarks can make navigation slow when a user only sees a small portion of the screen and is trying to differentiate navigation from main content from the footer
  • headings that look too much like paragraph text
  • unclear link text
  • scrolling, flashing, or moving objects (carousels, I’m glaring at you again)
  • drawn-out content that doesn’t provide a quick intro or conclusion at the beginning
  • horizontal scrolling
  • page content referred to by its position (e.g. “to the right”)
  • meaning conveyed by color alone
  • forms with fields and labels that are not positioned close together or on the same line

WebAIM’s advice:

“The general rule when designing for low vision is to make everything configurable. If the text is real text, users can enlarge it, change its color, and change the background color. If the layout is in percentages, the screen can be widened or narrowed to meet the user’s needs. Configurability is the key.”
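WebAIM’s “percentages” advice can be sketched in CSS. This is only illustrative (the selectors and values are my own, not WebAIM’s):

```css
/* Real text sized in relative units so users can enlarge it */
body {
  font-size: 100%;   /* respect the user's browser font setting */
  line-height: 1.5;
}

/* Fluid layout so the viewport can be narrowed or widened to fit */
main {
  width: 90%;
  max-width: 70rem;
  margin: 0 auto;
}
```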

WCAG supports people with low vision through its Perceivable principle. Here are 15 success criteria to consider when designing to include low vision users who magnify their screen:

  • 1.1.1 Non-text content (A)
  • 1.3.1 Info and relationships (A)
  • 1.3.3 Sensory characteristics (A)
  • 1.3.4 Orientation (AA)
  • 1.4.1 Use of color (A)
  • 1.4.3 Contrast (minimum) (AA)
  • 1.4.4 Resize text (AA)
  • 1.4.5 Images of text (AA)
  • 1.4.6 Contrast (enhanced) (AAA)
  • 1.4.8 Visual presentation (AAA)
  • 1.4.9 Images of text (no exception) (AAA)
  • 1.4.10 Reflow (AA)
  • 1.4.11 Non-text contrast (AA)
  • 1.4.12 Text spacing (AA)
  • 1.4.13 Content on hover or focus (AA)

Day 60: Identifying A11y Issues for Touch Screen Users

Touch screen accessibility is something I haven’t spent much time thinking about. Maybe not ever, come to think of it. I know that touch screens usually require gestures for interaction (i.e. swiping or tapping), but I hadn’t thought about how that might affect others. I even know about alternative ways to interact with touch screens, like switch devices, but I’m not all that familiar with the challenges they can bring.

So, today was a significant learning day to figure out how people could potentially be excluded from phone or tablet design or the apps that live on those devices.

Things I accomplished



What I learned today

The first thing that comes to mind when I hear “touch screen accessibility” is the disadvantages of touch screens for people with specific disabilities. However, touch screens are actually quite beneficial for people with other disabilities that make it hard to use a mouse or keyboard.

Things to consider for touch screen accessibility for web design:

  • Sufficiently large touch target sizes, which benefit users with motor and visual impairments, as well as everyone else [WCAG SC 2.5.5]
  • Simplified layout, generous white space, and intuitive design, which can benefit everyone
  • Allowing different screen orientations (portrait or landscape) to give the user a choice [WCAG SC 1.3.4]
  • Extra considerations for when the screen reader is turned on, since some gestures will change once activated
  • Alternatives for complex gestures must be offered [WCAG SC 2.5.1]
  • Custom gesture events must have an alternative method for activation (e.g. clicking a button) [WCAG SC 2.5.6]
  • Motion-activated events (e.g. shaking the device) must have an alternative method for activation (e.g. clicking a button) [WCAG SC 2.5.4]
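The target-size item is the easiest to make concrete in CSS. A minimal sketch, assuming the 44×44 CSS pixel minimum that SC 2.5.5 uses (the selectors are my own):

```css
/* Give buttons and tappable action links a comfortably large hit area */
button,
a.action {
  min-width: 44px;
  min-height: 44px;
  padding: 0.5em 1em;
}
```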

In short, don’t take the choice away from users, and don’t make assumptions about how they use their device. Offer them lots of choices (alternatives for interaction) so they can continue to do what they do.

This list reminds me that not all touch screen device owners use the touch screen. Other interaction methods include voice commands, Bluetooth keyboard, and switch devices. Not all who use the touch screen will touch the device the same way I can. Additionally, not all touch screens receive the same type of touch when it comes to how the screen responds to touch (i.e. hand versus gloved hand or stylus).

Day 59: Identifying A11y Issues for High Contrast Mode Users

Ok, ok… so High Contrast Mode (HCM) isn’t explicitly listed in the WAS Body of Knowledge under the current section I’m in, but it’s an area of testing that is significant to me. I’m interested in seeing how my content looks when it undergoes transformation created by the system. And I wanted to take time to think about what others using it may experience and the strategies they may have to use when something can’t be seen after that transformation.

Additionally, it’s such a cheap and easy way to test that I like to encourage other designers and developers to use it as well. It matters to the people who use your sites and might be running HCM or inverted colors.

One last thing I’d like to mention before sharing what I did and learned… I actually like using HCM on Windows. It has improved greatly over the past few years (I didn’t enjoy it when I first tried it). Oddly enough, a Dark Mode feature has been popping up more and more across applications and systems, so that has provided me with an alternative, too. I don’t use HCM on a regular basis, but I’ve used it for relief before in a bright office with bright windows and three bright monitors glaring around me. I experience light sensitivity, so it provides me with a solution to continue working at my computer without contributing to headaches.

Things I accomplished today

What I learned today

Something I always have to look up when I want to test with HCM is the keyboard shortcut to activate it: Left Shift + Left Alt + Print Screen. The same key combination will exit HCM.

Not all of my Windows custom colors come back to life after using High Contrast Mode. Weird.

Invert Colors on macOS, which is a completely different experience to me, can be activated with Control + Option + Command + 8.

HCM offers some personal customization of colors. I played with it some and settled on the High Contrast #1 preset, which offers a black background and yellow text. Then I tweaked hyperlink colors to stand out more in my browser (Firefox).

HCM benefits people with visual and cognitive disabilities, as well as people with environmental impairments. Some examples:

  • low vision
  • Irlen syndrome
  • bright environments like outdoors
  • low-light environments

Not surprisingly, WCAG comes into play here: SC 1.4.8 Visual Presentation. Yes, that’s inside the Perceivable principle!

The last point brought home the issue that we can never assume how someone else’s system is set up. Default HCM offers white text on black background. But that doesn’t work for everyone, dependent upon their visual needs and preferences. The best we can do is follow some core principles to enable people to perceive our content:

  • Give focusable items some sort of visual indicator like a border or highlight (we’re doing it for our keyboard users anyway, right?)
  • Don’t use background images to deliver important content
  • Be considerate of foreground and background colors and how they can change drastically, dependent on the user’s system settings
  • Don’t rely on color alone to convey important information
  • Take advantage of the flexibility of SVGs, currentColor, and buttonText
  • Use high contrast icons, even without considering HCM
  • Add or remove backgrounds that affect HCM users
  • Use semantic markup to improve user experience
  • Always manually test with HCM yourself at the beginning of design and end of development

Firefox partially supports HCM-specific code, and Chrome doesn’t support it at all. Microsoft supports it, though, with:

@media (-ms-high-contrast: active) {}
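As an illustrative sketch (the selectors and techniques are my own, not from a specific site), styles inside that query might restore visibility for elements that otherwise disappear in HCM, using the currentColor and system color keywords mentioned earlier:

```css
@media (-ms-high-contrast: active) {
  /* SVG icons pick up the user's chosen text color */
  .icon {
    fill: currentColor;
  }

  /* Buttons fall back to the system's button text color */
  .btn {
    color: ButtonText;
  }
}
```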

For the most part, I was pleasantly surprised that I had no trouble seeing all components on my screen throughout the Windows system, as well as elements on familiar web pages that I frequent. There were a few exceptions, but at least I knew when things were present, even if I couldn’t see them. Not great for someone new to those sites, though.

Working in a CKEditor today, I discovered they had a neat trick for people using HCM. The editor icons were no longer icons; they were plain text. Kind of neat! Read further ahead to see more of my experience.

More on CKEditor

As I mentioned under “What I learned today”, my HCM encounter with plain text transformation from icons in a CKEditor was a surprise:

CKEditor toolbar with all tool buttons using plain text as labels.

I had to turn off HCM just to remember what I was used to looking at:

CKEditor toolbar with buttons using icons as labels.

Naturally, that got me very curious. So, I visited the CKEditor website and dug into their documentation. Indeed, they have their own support for HCM. Someone put some thought into it! The same transformation did not happen as I wrote this post in WordPress with their TinyMCE editor.

Day 58: Identifying A11y Issues for Keyboard Users

Through studying WCAG (Guideline 2.1) and other web accessibility documentation and articles, I know that keyboard navigability and interoperability are important for a wide variety of users. Some important ideas to focus on when creating websites and keeping keyboard accessibility in mind:

  • Actionable elements (links, buttons, controls, etc.) must be focusable via keyboard (WCAG SC 2.1.1);
  • All focusable elements need a visible border or highlight as a focus indicator (WCAG SC 2.4.7);
  • Logical (expected) tabbing order is set up appropriately (WCAG SC 2.4.3);
  • No keyboard traps, like poorly developed modals, have been created (WCAG SC 2.1.2).
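The focus indicator requirement (SC 2.4.7) is straightforward to sketch in CSS. The selectors and colors here are illustrative, not a prescription:

```css
/* A visible focus indicator for focusable elements; the offset keeps
   the outline from overlapping the element's own border */
a:focus,
button:focus,
input:focus {
  outline: 3px solid #1a73e8;
  outline-offset: 2px;
}
```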

The best way to test for keyboard accessibility? Test with a keyboard! It’s one of the easiest and cheapest ways to find out if you’re blocking someone from accessing your content. Critical (yet basic) keys to use:

  • Tab
  • Enter
  • Spacebar
  • Arrow keys
  • Esc

If any of those keys fail you when it comes to expected behavior for controls and navigation, it’s time to start digging into the code and figuring out what’s preventing that expected and conventional behavior.

That being said, I’ve started looking at section two of the WAS Body of Knowledge to fill in any gaps I have about identifying accessibility issues, starting with interoperability and compatibility issues. I’ve had a lot of practice checking for keyboard accessibility due to its simplicity, but I leave no stone unturned when it comes to studying for this exam and making sure I’m not missing more gaps in my web accessibility knowledge.

Things I accomplished

What I learned today

Today I didn’t learn much more on top of what I knew already. However, I’m always eager to take the opportunity to advocate for keyboard testing as a reminder to designers and developers that not everyone uses a mouse and even well-written code can produce a different experience than initially expected.

One thing I did learn:

  • “A common problem for drop-downs used for navigation is that as soon as you arrow down, it automatically selects the first item in the list and goes to a new page.” (Keyboard Access and Visual Focus by W3C) I’ll have to watch out for this one when I audit other sites, since I haven’t created a drop-down with that type of behavior myself.

Day 57: Comparing AT and Strategies of People with Disabilities

Before I move into the “Identifying accessibility issues/problems” section of the WAS Body of Knowledge, I needed to recap for myself what I learned this week about people with different types of disabilities, the barriers they run into, strategies and assistive tech they use to overcome barriers, and the WCAG principles that benefit each one. I ended up with an imperfect table visualization as I tried organizing my thoughts on what I’d learned this past week, as well as the entirety of the past 56 days.

Thing I accomplished

What I learned today

It’s no easy task trying to visualize comparisons (in table format) of disability types and strategies used to interact with the web. For one, pigeon-holing any disability is tough, due to the nature of variety within any disability type. And organizing that information for me to better understand what strategies and assistive tech may be used in different instances really challenged me in considering what the best way was to approach this visualization. My personal cheat sheet as a 2D table doesn’t do the information justice. There are experts out there who have likely wrestled with this themselves.

Assigning the WCAG principles to each instance helped me really think about why these principles were developed and how invaluable they are to many people in a very real and personal way.

Day 56: Practice with Narrator

Back to exploring more assistive tech, specifically screen readers. Today I experimented for the first time with Narrator, the built-in screen reader for Windows. I was a bit apprehensive at first since it has not been on my priority list to learn, knowing that a mere 0.3% of desktop screen reader users actually use Narrator, according to WebAIM’s latest Screen Reader User Survey. However, it’s free and built-in for Windows users (and it’s mentioned in the WAS Body of Knowledge study material), so I’m giving it a chance.

Things I accomplished

What I learned today

  • Turn on Narrator with shortcut keys: Windows (logo) key + Ctrl + Enter.
  • Narrator was finicky with Firefox, my preferred browser, but Edge is recommended as the best web browser when using this screen reader.
  • Narrator has a Developer View (Caps Lock + Shift + F12), which masks the screen, highlighting only the objects and text exposed to Narrator.
  • By default, Narrator presents an outline around the parts of the webpage that it is reading aloud. I found this handy to keep up with where it was at.
  • It has touch gestures. I suppose that makes sense when not all Windows computers are only desktop computers.
  • Accessible documents are important. (I knew this already.) I was able to easily navigate between tables in the Deque PDF cheatsheet with the T key because they made it with accessibility in mind.

There is still so much to learn! Jumping between screen reader programs leaves my head spinning with all the shortcut keys I’d need to know. I’ll come back to this screen reader at some point because one hour of use is not enough to get fully comfortable with it. I also need to expand upon my cheatsheet to include more commands/tasks. Currently, it’s just a quick guide to the most frequent tasks I’ve needed.

An Aside: Fun A11y Resource


Day 55: Users with Auditory Disabilities

Auditory disabilities range from different levels of hearing difficulties to deafness, and may even include deaf-blindness. Being inclusive of this group seems fairly straightforward and easy (albeit captioning may require some budgeting).

Things I accomplished

What I learned today

Users who are deaf from birth may have sign language as their first language. Text information on websites can be their second or third language. Icons, illustrations, and images can help enhance clarity of information provided on a website.

In order to include people with auditory disabilities, web designers and developers need to review the WCAG Perceivable principle. Effective accommodation strategies for people with hearing impairments include:

  • Providing transcripts and captions alongside any content that has audio;
  • Creating media players that can display captions and offer options to adjust text size and color of those captions;
  • Providing options to stop, pause, and adjust volume of audio content within the customized media player;
  • Posting high-quality foreground audio that is clearly distinguishable from background noise; and
  • Writing text in simple, clear language.

Offering sign language video as an alternative can be a nice-to-have (WCAG SC 1.2.6, Level AAA), but it isn’t always the right solution for every person with a hearing impairment. Though deaf culture is a thing, designers should never assume that every deaf person knows sign language. Additionally, it can be hard to clearly see sign language provided via web video.

It is controversial to use the word disabled in conjunction with a deaf person. Many within that community don’t consider themselves disabled, since they are thinking, capable people.


Day 54: Users with Motoric Disabilities

More on people with various disabilities. Today’s exploration led me to learn more about people with different motor disabilities. This group may include people with cerebral palsy, multiple sclerosis, quadriplegia, and arthritis.

Things I accomplished

What I learned today

When considering people with motor disabilities, web designers and developers should hold fast to WCAG’s Operable principle. Specific important concepts include creating a usable interface that:

  • is keyboard navigable (this also benefits voice activated software)
  • tolerates user error
  • provides alternative navigation methods to skip over lists of links, repetitive sections, and lengthy text
  • sets important stuff above the fold
  • offers autocomplete, autofill, or autosave
  • enables extended time limits
  • manages off-screen items appropriately (display: none or visibility: hidden when out of view)
  • provides clear focus outlines
  • provides large target (clickable) areas (buttons, links, controls)

There are one-handed keyboards for people with the use of only one hand. Other assistive technologies that can be used by those with more severe paralysis include head wands, mouth sticks, voice recognition software, switch access, and eye-tracking.


Day 53: Users with Cognitive Disabilities

Users with cognitive disabilities cover a wide scope of people, including those with autism, Down syndrome, Alzheimer’s, and ADHD. Persons in this category may have trouble concentrating, experience a neurophysiological disability, or struggle with a level of intellectual disability. People with cognitive disabilities may use some of the same strategies that people with reading difficulties use in order to navigate the web. Additionally, some people in this group may use assistive technology that assists with writing on the web.

Things I accomplished

What I learned today

Things to consider as a web developer/designer when trying to include this category of disability:

  • Clean and simple layout / presentation is of utmost importance.
  • Images and multimedia should supplement text, when possible.
  • Provide clear and consistent labels.
  • Utilize convention with predictable interactions.
  • Offer options to suppress distractions, like carousels, animation, and media.

This population is larger than those with all other physical and sensory disabilities combined, and yet it’s harder to use a universal solution for everyone within this group (due to the scope of abilities categorized within this group).

Memory and organization are two big challenges that this group has to overcome on a daily basis.


Day 52: Users with Reading Difficulties

Screen readers. They’re just for the blind and visually impaired, right? Wrong! There’s a whole other class of screen readers and screen reader users that often get little recognition. I’m talking about people who have difficulty reading. This group contains a wide spectrum that may include, but is not limited to, people with ADHD, dyslexia, Irlen syndrome, or memory loss. And I’m accusing myself of not acknowledging this group when it comes to envisioning people who use screen readers (text to speech technology).

Things I accomplished



What I learned today

There is a stark difference between screen reader use by people with reading difficulties as compared to the blind. For one, the first group doesn’t need all things read. They mostly need assistance with having some text read aloud, rather than having everything read aloud along with additional navigational aids. A couple of screen readers that benefit this group:

Another strategy that people with reading difficulties use to access content on the web is to change styles on a web page or document. This includes customizing font size, color, and family. Using true text, rather than text inside of images, makes the reading experience for this group of people more enjoyable and inclusive.

Examples of barriers that may stand between people with reading difficulties and the web content they pursue:

  • Complex navigation mechanisms and page layouts.
  • Complex sentences and unusual words.
  • Long passages of text without images, graphs, or other illustrations.
  • Moving, blinking, or flickering content, and background audio that cannot be turned off.
  • Web browsers and media players that do not provide the ability to suppress animations and audio.
  • Visual page designs that cannot be adapted using web browser controls or custom style sheets.

Day 51: Users with Low Vision

Continuing on through the WAS Body of Knowledge, I’m currently working through concepts that involve building websites that accommodate strategies used by people with disabilities. Today I focused on those with low vision. I’m personally most familiar with this group, and yet the strategies that people with low vision use to access web content can vary greatly. So, I consider there is still room for me to learn here.

Things I accomplished



What I learned today

Several low vision users use screen readers, but oftentimes they make the most out of the vision they do have by:

  • Using text enlargement and zoom in the browser
  • Changing colors, contrast, or fonts in the browser or operating system
  • Using magnifying tools
  • Using keyboard commands in conjunction with mouse to speed up interaction

ZoomText Magnifier/Reader is a Freedom Scientific product (the same company that produces JAWS). It appears to be a very robust program, offering enhancements to increase visibility of content, cursor, and focus. Additionally, it has a screen reader function, and has a toolbar that lets the user search and find by text, headings, lists, tables, etc (unified finder). ZoomText and JAWS can work together.

“VoiceOver can describe images to you, such as telling you if a photo features a tree, a dog, or four smiling faces. It can also read aloud text in an image — whether it’s a snapshot of a receipt or a magazine article — even if it hasn’t been annotated.” WOW. I tried this on my iPhone and verified that it could describe a picture of my son outside in the snow. My mind was BLOWN. This technology makes me very happy for one of my blind friends!

iOS magnification can jump from 100% to 1500%. Android phones have magnification, too.

High contrast text, color inversion, and color correction are available on Android 5.0+, however, they are still considered experimental features. That’s interesting, considering these are solid accessibility options on iPhones.


Day 50: Refreshable Braille Displays

Today marks my halfway point in learning. 50 days down (total of 72 hours study time), 50 more to go! So far, I’ve managed to cover swaths of WCAG, ARIA, and ATAG documentation. Additionally, I’ve learned about JavaScript techniques to better support screen readers when it comes to custom widgets. During this time, I’ve also managed to experiment with some of the popular screen readers (VoiceOver, NVDA, and TalkBack).

On that note, I’m curious about braille output. I’m very familiar and comfortable with speech output from screen readers, but am less so with refreshable braille displays. Unfortunately, I don’t currently have access to a refreshable braille display (not that I could read it, even if I did), but that won’t stop me from learning about them online.

Things I accomplished

Watched on YouTube:


What I learned today

  • Refreshable braille displays come in many shapes and sizes, some with input options, too!
  • Refreshable braille displays can be hooked up wirelessly, like to an iPad, but not all computers/devices support wireless connection.
  • One-line braille displays can greatly limit how information is conveyed to a user; spatial information given in tables and charts can be especially challenging.
  • Android support for Braille is BrailleBack.
  • Braille comes in two forms: contracted and uncontracted. Contracted is more advanced and allows for shorthand, of sorts, like abbreviations and contractions.

Day 49: Modal Completion

Backtracking to Day 41, I returned to the CodePen project I’d started in order to finish making it accessible. Due to the unforeseen popularity of this pen (before I’d even completed it), I felt I needed to get all the pieces right.

Thing I accomplished

My pen now meets the requirements for a dialog/modal to be accessible. I even ran a quick test with VoiceOver in Safari, as well as a keyboard navigability check.

What I learned today

The two things I hadn’t completed earlier involved:

  • not allowing the user to Tab outside of the modal, and
  • returning focus back to the button that triggered the modal, once the modal is closed.

It was the Tab trap that was getting me. I knew the Tab key was associated with key code 9, but it took me a bit to realize I needed to attach a ‘keydown’ event listener to the close button. That solved it!

As for returning focus, it was as simple as calling the focus method on the open button within my closeModal function. It worked!
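Putting both pieces together, here’s a minimal sketch of the pattern (the function and variable names are my own, and I’ve used event.key instead of the key code 9, since key codes are deprecated; my actual pen differs):

```javascript
// Focus-trap sketch for a modal: wraps Tab from the last focusable
// element back to the first (and Shift+Tab the other way around).
function makeTabTrap(firstEl, lastEl) {
  return function onKeydown(event) {
    if (event.key !== 'Tab') return;
    if (event.shiftKey && event.target === firstEl) {
      event.preventDefault();
      lastEl.focus();            // wrap backwards to the last control
    } else if (!event.shiftKey && event.target === lastEl) {
      event.preventDefault();
      firstEl.focus();           // wrap forwards to the first control
    }
  };
}

// Closing the modal returns focus to the button that opened it.
function closeModal(modal, openButton) {
  modal.hidden = true;
  openButton.focus();
}
```

In a real page, the handler returned by makeTabTrap would be attached with addEventListener('keydown', …) on the modal container.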


Day 48: Accessible Single Page Applications

Still looking over the WAS Body of Knowledge section about creating accessible single-page applications (SPAs), I took my research a step further beyond the items they listed (aria-live and focus management) and started exploring on my own what else I needed to pay attention to when it came to making SPAs accessible.

Things I accomplished


What I learned today

  • SPAs are not exempt from best practices used on static webpages or full applications. Document structure, native elements, and keyboard navigability are key to creating accessible SPAs.
  • The page title should change with each new view.
  • tabindex="-1" lets scripts bring focus to an element, but doesn’t let a user tab to it.
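The tabindex="-1" trick pairs naturally with the page-title point above. A hypothetical helper for a route change (the names are mine; it takes the document object as a parameter so the idea stays framework-agnostic) might look like:

```javascript
// After a new SPA view renders, update the page title and point focus
// at the view's main heading so screen reader users hear where they
// landed. tabindex="-1" makes the heading scriptably focusable
// without adding it to the Tab order.
function focusNewView(doc, heading, newTitle) {
  doc.title = newTitle;                  // title should change per view
  heading.setAttribute('tabindex', '-1');
  heading.focus();                       // move focus into new content
}
```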