Types of Disabilities, Part 2

In Types of Disabilities, Part 1, I learned and shared about visual, auditory, and mobility disabilities. In this post I’ll cover the rest of the list, diving deeper into the categories covered by the Deque material and the CPACC Body of Knowledge: cognitive, speech, seizure, psychological, and compound.

Cognitive

Cognitive disabilities are the most common type of disability because the category is so broadly defined, encompassing impairments in thinking, language, learning, perception, attention, memory, and problem solving.

Types of cognitive disabilities:

  • Neurodevelopmental disorders (autism, Down syndrome)
  • Memory impairments (Alzheimer’s, dementia)
  • Neurodegenerative disorders
  • Brain injury impairments (injury, tumors)
  • Learning disabilities (dyslexia, dysgraphia, dyscalculia, aphasia)

Causes of cognitive disabilities:

  • congenital
  • developmental
  • traumatic injury
  • infections / disease
  • chemical imbalances
  • aging

Suggested AT or strategies for improving focus, leveraging learning styles, or accommodating short-term memory:

  • screen magnifier
  • easily editable/customizable content
  • customizable fonts and colors
  • screen reader or read-aloud (text-to-speech) tools
  • interactive transcripts
  • blocking animations or flashing elements
  • break up long tasks by saving work and doing shorter tasks


Digital Environments

Challenge: Complex designs
  • Create simple, predictable, organized designs
Challenge: Complex tasks
  • Simplify steps and user interface components
Challenge: Technical problems and errors
  • Alert users about errors
  • Provide clear solutions

Physical and Digital Environments

Challenge: Text-based information
  • Supplement text with images and visuals
  • Use simple, easy-to-understand language

Reading (dyslexia, dysgraphia)

Digital Environment

Challenge: Floating words
  • A dyslexia-friendly font
  • Additional time to complete tasks
Challenge: Letter confusion, such as p b d q
  • Font, contrast, and style customization
  • Additional time to complete tasks
Challenge: Timed sessions
  • Time extensions or saved work during timeouts
  • A screen reader to listen along with the text or view highlighted words or phrases
  • Visible focus indicators to keep track of position on the page
  • Applications or dictionaries that present words with pictures
  • Additional time to complete tasks
Challenge: Deciphering the way content is presented
  • A custom style sheet
Challenge: CAPTCHA
  • An alternate type of security feature or problem to solve
Challenge: Difficulty processing visual content
  • A screen reader to listen to content
  • Additional time to complete tasks
Challenge: Difficulty accurately spelling words
  • Spelling and grammar checkers


Math (dyscalculia)

People with dyscalculia have difficulty understanding or using math because of how their brain functions, as opposed to experiencing a psychologically induced fear of math.

Digital Environments

Challenge: Distinguishing right from left in graphic images
  • A data table or text description
  • Additional time to complete tasks
Challenge: Graphs, figures, and diagrams (difficult to copy)
  • Text-to-speech to listen to problems
  • Additional time to complete tasks
Challenge: Calculations
  • A reference sheet with common equations, as an accommodation
  • An onscreen calculator, as an accommodation
  • Additional time to complete tasks

Speech

A person with a speech disability may have trouble articulating words or producing speech sounds. Oftentimes people with speech disabilities will use unaided or aided Augmentative and Alternative Communication (AAC) to give them a voice. A person with a speech disability may or may not have additional disabilities. In those cases, the same design considerations for blindness, low vision, motor disabilities, auditory disabilities, and cognitive disabilities may need to be applied.

Causes of speech disabilities:

  • genetics
  • learning disabilities
  • auditory disabilities
  • motor disabilities
  • autism
  • traumatic brain injury
  • stroke
  • cancer

Some types of speech disabilities include:

  • stuttering
  • cluttering
  • apraxia
  • dysarthria
  • speech sound disorder (articulation, phonetic)
  • muteness

Suggested AT or strategies for people with speech disabilities:

  • touch screens
  • alternative keyboards
  • single switch devices
  • eye-tracking technologies
  • speech-generating software
  • word prediction software
  • symbol boards and languages
  • symbol software
  • translation software

Digital Environments

Challenge: Live chats / webinars / teleconferences (voice-based communication)
  • Offer text-based chat
Challenge: Additional disability (low vision, hard of hearing, etc.)
  • Create interoperable content for optimal accessibility
  • Provide captions and transcripts
  • Make content keyboard operable
  • Offer multiple formats of content

Physical Environments

Challenge: The person may also have mobility issues
  • The same solutions as for motor disabilities
Challenge: Additional disability (low vision, hard of hearing)
  • Create interoperable content for optimal accessibility
  • Provide captions and transcripts
  • Make content keyboard operable
  • Offer multiple formats of content

General

Challenge: Producing speech sounds
  • Low-tech AAC (boards, gestures)
  • High-tech AAC (computer-generated voice)
  • Patience


Seizure Disorders

Seizures are bursts of electrical activity in the brain that can interfere with information processing or cause involuntary muscle movement. Photosensitive (photo-epileptic) seizures, triggered by stimuli such as flashing lights, are one type.

Causes of seizures:

  • brain injury
  • dehydration
  • sleep deprivation
  • infections
  • fevers
  • drug overdoses or withdrawals
  • flashing lights (photo-epileptic)

Digital Environment

Challenge: Intense flashing light, blinking, or flickering
  • Eliminate flashing, or reduce the speed and intensity of animations (see the sketch below)
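
One hedged way to implement that solution, assuming the movement comes from a CSS animation (the class name and keyframes are my own), is the prefers-reduced-motion media query:

/* A pulsing banner, as a stand-in for any flashing animation. */
.promo-banner {
  animation: pulse 0.5s infinite alternate;
}

@keyframes pulse {
  from { opacity: 1; }
  to   { opacity: 0.4; }
}

/* Honor the user's OS-level request for reduced motion. */
@media (prefers-reduced-motion: reduce) {
  .promo-banner {
    animation: none; /* eliminate the flashing entirely */
  }
}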

Psychological/Psychiatric

Psychological disorders encompass a wide range of emotional and mental conditions. When the condition impacts daily life activities, it becomes a disability. Some causes of mental illness may include:

  • trauma
  • chemical imbalances
  • genetics
  • social factors

Anxiety

Anxiety disorders are the most common psychological disorders. They manifest as fear and worry about situations or objects. A few anxiety disorders are:

  • panic disorder
  • phobias
  • post-traumatic stress disorder (PTSD)
  • obsessive-compulsive disorder (OCD)

Mood

Mood disorders cause significant fluctuations in a person’s emotional state. Some subcategories of mood (affective) disorders:

  • depression
  • bipolar
  • seasonal affective disorder (SAD)

Schizophrenia

Schizophrenia symptoms are broken into two groups: positive (hallucinations and delusions) and negative (lack of motivation, flat mood, social isolation). It’s theorized that this disorder is caused by genetics, a chemical imbalance, or environmental factors. Sometimes people with this disorder can struggle with:

  • expressing themselves
  • attention and memory deficits
  • controlling their movements

It’s estimated that 2.4 million Americans (1.1%) have schizophrenia. It’s also estimated that 4.9% of people with schizophrenia die by suicide, with an average age at death of 28.5 years.


Other Psychological Disorders

Attention Deficit Hyperactivity Disorder (ADHD)

ADHD is categorized as a behavioral disorder. It’s broken into three subcategories: inattention, hyperactivity, and impulsivity.

Personality Disorders

Personality disorders involve behavior that deviates from cultural expectations. Two common personality disorders are antisocial personality disorder and borderline personality disorder.

Eating Disorders

Eating disorders involve excessive concern about food and weight. The three most common eating disorders are anorexia nervosa, bulimia nervosa, and compulsive (binge) eating.

Multiple/Compound

It’s possible to have more than one disability. People with multiple disabilities experience a combination of impairments (of differing degrees) that may affect their speech, motor, visual, or hearing abilities. More inclusive accommodations help anyone with multiple disabilities live more independently.

In Conclusion

Without understanding the people within this multifaceted culture of disability, we can’t create solutions to the challenges they face. Read my other posts from my WAS journey that go further into keeping perspective about the people we are trying to include and serve:

Day 99: Semantic Elements and Their Quirks

Today I worked on finishing a Deque course about semantic code. All my time got wrapped up in the fascinating question of which HTML is read aloud and easily navigable, and which elements are ignored by screen readers.

What I reviewed today

  • Semantic structures that affect screen reader users (and sometimes everyone else):
    • tables
    • lists
    • iframes
    • elements announced & unannounced
    • parsing & validity
  • Navigation keyboard shortcuts for screen readers

What I learned from it

I’ve been mixing up the purpose of the caption element and the summary attribute for tables. The caption is the accessible name of the table, so it shows up in the list of tables provided by a screen reader. The summary attribute was deprecated in HTML5. A caption should be short, even when it includes a brief summary. Replacements for summary include (see the sketch after this list):

  • putting the table in a figure element, and using a figcaption with aria-labelledby on the table to associate the table with the summary
  • adding an id to a separate paragraph and adding aria-describedby to the table element to point to that paragraph’s id
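
A minimal sketch of the second approach (the id, caption, and data are my own examples):

<!-- The longer summary lives in an ordinary paragraph with an id. -->
<p id="sales-summary">
  Quarterly sales by region, ordered from highest to lowest.
</p>

<!-- aria-describedby points the table at that paragraph; the caption
     stays short because it is the table's accessible name. -->
<table aria-describedby="sales-summary">
  <caption>Quarterly sales</caption>
  <tr>
    <th scope="col">Region</th>
    <th scope="col">Sales</th>
  </tr>
  <tr>
    <td>North</td>
    <td>$1,200</td>
  </tr>
</table>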

When using iframes, include a title attribute, and ensure the embedded page/content has a title element. Screen readers like JAWS vary in which one they read. Also, as a note to myself, I need to start identifying the type of content within the iframe title attribute, like starting the title with “Video”, so it’s clear what users are accessing.
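For example, a hedged sketch of that note-to-self (the title wording and URL are placeholders of my own):

<!-- Leading with "Video:" tells users what kind of content they are
     entering before they interact with the iframe. -->
<iframe title="Video: Introduction to web accessibility"
        src="https://example.com/embed/a11y-intro"></iframe>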

There are HTML elements we can’t rely on screen readers to read aloud; important information conveyed by these elements should therefore have additional cues:

  • strong
  • em
  • q
  • code
  • pre
  • del
  • ins
  • mark

These have given me a lot to think about, and they stress the importance of testing my sites on a few different screen readers and platforms.

Wrapping a code element in a pre element is appropriate and helps the visual presentation of code blocks.
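A minimal sketch:

<!-- pre preserves whitespace and line breaks; code marks the text as code. -->
<pre><code>function add(a, b) {
  return a + b;
}</code></pre>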

Day 78: Learning about Orca

Orca is an open source screen reader for Linux. This is my first time reading about it. Hopefully, I’ll have a chance to actually experiment with it. However, I’ll need to set up a Linux distribution that works with it first.

Things I accomplished

  • Attempted to install Orca on my netbook (Lubuntu), and then on my Raspberry Pi (Raspbian). Both attempts failed (today, anyway).
  • Read through a lot of Orca documentation.
  • Added keystrokes to my screen reader cheatsheet, and copied the spreadsheet over to my WAS cheatsheets on Google Sheets.

What I learned today

  • Orca can provide speech or braille output.
  • Orca is provided as the default screen reader for several Linux and UNIX-based distributions, including Solaris, Fedora, and Ubuntu.
  • Orca provides a really cool feature called Where Am I that allows additional commands to inform the user about page title, link information (location, size), table details, and widget role and name.
  • Many of the navigation keystrokes are similar to other desktop screen reader commands.
  • Orca also has commands specific to dealing with Live Regions on webpages.
  • When the “Super” key is referenced, it’s talking about the Windows logo key.
  • Orca provides Gecko-specific navigation preferences. I wonder if it works best with Firefox?

Not only did I learn about Orca, but I also got sucked down the Linux rabbit hole in order to better grasp that OS, its distributions and desktop environments, and additional “universal access” for people with disabilities. However, that topic could take another week to work through.

Day 77: Experimenting with Window-Eyes

Window-Eyes is a screen reader that appears to have fallen out of mainstream use. According to WebAIM’s latest Screen Reader Survey, 1.5% of their respondents reported that they use Window-Eyes. I experimented with it because it was listed as an example of an assistive technology to experiment with and test out.

What I learned today

  • Window-Eyes works best with Internet Explorer.
  • Window-Eyes was folded into the AI Squared family, and there are instructions on how to migrate from Window-Eyes to JAWS (mp3).
  • Window-Eyes is free to download if you have a registered copy of Microsoft Office.
  • Most keystrokes are similar to other Windows screen readers, but it uses the Control or Insert keys as modifier keys.

Day 76: Screen Reader Keystroke Comparisons, Part 2

Continued work from Screen Reader Keystroke Comparisons, Part 1. Tomorrow I hope to dive into one of the other screen readers that I’m less familiar with (either Window-Eyes or Orca).

Thing I accomplished

  • Added VoiceOver (Mac), VoiceOver (iOS), and Talkback keystrokes/gestures to my (offline) comparisons spreadsheet.

What I learned today

  • For VoiceOver on Mac, Control + Option + Command + X navigates to the next list on a page.
  • Noticed for the first time that the Talkback cheatsheet for Android devices [PDF] recommends using the Firefox browser. I had assumed Chrome or the device’s proprietary browser.
  • It’s nerdy fun to actually see the keystroke and gesture differences and similarities next to each other, which is helping me differentiate what works on which device.

Day 75: Screen Reader Keystroke Comparisons, Part 1

Sporadically, I’ve used some of my study time to test out assistive technologies (AT) like screen readers, speech recognition, and high contrast mode. I’m circling back to AT because I’ve hit the section in the WAS Body of Knowledge that stresses testing with AT in order to better understand how people who use it may experience your website. This will be a fun week for me because I enjoy trying out AT and broadening my perspective on how users encounter webpages.

Thing I accomplished

  • Added NVDA, JAWS, Narrator, and VoiceOver (Mac) keystrokes to a new (offline) comparisons spreadsheet I’ve started working on.

What I learned today

  • NVDA and JAWS have many similar keystrokes and shortcuts, although I’m not sure why NVDA uses “D” for going to the next region when JAWS uses “R”, which is easier to remember.
  • Oh! Deque has a cheatsheet for JAWS Keyboard Shortcuts for Word. I’ll have to take a closer look this week.
  • JAWS has several different cursors to toggle between, dependent on context.
  • Narrator has a specific mode for developers to use during testing.
  • Keyboard accessibility is not enabled by default on a Mac. Accessibility and screen reader test results will be inaccurate if you do not enable keyboard accessibility in the following two places:
    1. System Settings: Keyboard > Shortcuts > Full Keyboard Access > All controls
    2. Safari Settings: Advanced > Accessibility > Press Tab to highlight each item on a webpage.
  • Switching between Mac and Windows keystrokes just feels awkward. Imagine a screen reader user switching operating systems!

Day 65: Identifying A11y Issues for Switch Control Users

About a week ago I learned more about users with motor disabilities, which is usually who I think of when it comes to switch device use. Today I wanted to focus more on the challenges switch device users may encounter when using websites. My study time ended up turning into a review of some things I’d already learned, as well as discovering some new articles and videos about switch access.

I was not able to do any testing myself, since I don’t have any switch devices. On another day, when I’m feeling more adventurous, I’ll dedicate a study session to testing with a Bluetooth keyboard or Android device buttons to simulate the experience.

What I learned today

Apple’s Switch Control feature is built on top of the VoiceOver platform, which is how it knows to recognize things like buttons. Point mode allows access to otherwise inaccessible apps by scanning horizontally and vertically (x, y coordinates) to create a clickable point. It also gives an alternative for quicker access to focusable elements on a web page, rather than waiting for a page to be parsed and scanned piece by piece.

After learning more about switches and switch control, I can better understand the many switch control settings on my iPhone.

I usually think of switches being used by people with motor disabilities, but there are other people who use them, too. Some people with intellectual or learning disabilities may use a switch because a mouse, keyboard, or game controller is just too complex to use.

Switch access includes devices that can receive input from almost any body part. Actions may include, but are not limited to:

  • sip-puff
  • push
  • pull
  • press
  • blink
  • squeeze
  • twitch

Windows 10 has eye control as a method of switch access. Apple doesn’t have this feature… yet.

Android mobile devices have switch access much like Apple’s feature.

In review of what I learned last week

Restating from the article mentioned at the beginning: designers and developers need to review WCAG’s Operable principle for switch control accessibility. It can’t be emphasized enough that if your website follows those guidelines and success criteria, it will be accessible to switch users and many other users.

What front-end developers can do for switch control users:

  • make the website keyboard accessible so all elements are reachable
  • place key elements above the fold to relieve tedious scrolling
  • allow alternatives to advanced gestures, like hover and drag-and-drop
  • use larger text for readability at a greater distance between user and screen
  • avoid time limits, or allow the user to extend them
  • tolerate user error
  • provide alternative navigation methods to skip over lists of links, repetitive sections, and lengthy text
  • offer autocomplete, autofill, or autosave
  • manage off-screen items appropriately (display:none or visibility:hidden when out of view; see the sketch after this list)
  • provide clear focus outlines
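
A hedged sketch of the off-screen point above (the class names are my own). Both rules remove an element from the focus order, so a switch user doesn’t have to step through controls they can’t see:

/* A closed slide-out menu: removed from layout and from the focus order. */
.slideout-menu.is-closed {
  display: none;
}

/* Alternative: keeps the layout space but is invisible and unfocusable. */
.slideout-menu.is-faded {
  visibility: hidden;
}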

Day 64: Experimenting with Dictation & Speech Recognition

While working through my study session a few days ago about identifying a11y issues for people who use voice input to navigate the web, I ran across an interesting tweet thread started by Rob Dodson.

So funny that this stuck out on my feed last night as I was finishing up my blog post! Admittedly, it sparked my curiosity about how ARIA affects voice dictation users, and spurred me on further to start testing with the different platforms that are available to people who need full voice navigation.

Things I accomplished

  • Experimented with Apple Dictation and Siri combination (with brief use of VoiceOver)
  • Experimented with Windows Speech Recognition in Cortana’s company
  • Attempted to write some of this blog post with a combination of Apple Dictation and Windows Speech Recognition.

What I learned today

Disclaimer: I am not a voice input software user. This is VERY new to me, so lean very lightly on my “experience” and what I’m learning.

Learning curve and first-use exposure aside, Apple’s Dictation feature didn’t seem to have enough reach or intuition to do the multitasking I wanted to do. Additionally, I found that I had to keep reactivating it (“Computer…”) to give commands. There was no continuity. Apparently, I’m not the only one disappointed with the lack of robustness Dictation + Siri has to offer. Read another person’s feelings in Nuance Has Abandoned Mac Speech Recognition. Will Apple Fill the Void?

Here is Apple’s dictation commands dialog that I had off to the side of my screen as I worked with it enabled:

Apple Dictation Commands dialog, starting with Navigation.

The number system for accessing links appears to be universal across dictation software. I found it in both Apple’s Dictation and Windows Speech Recognition, and I know Dragon has it, too.

Windows Speech Recognition was just as awkward for me. However, I felt it was built more for navigating my computer, not solely for dictating documents. Microsoft has a handy Speech Recognition cheatsheet.

Here is the Windows Speech Recognition dock, which reminds me of what I’ve seen online of the Dragon software:

Speech Recognition dock "Listening".

I found myself struggling to not use my keyboard or mouse. If I had to rely on either of these OS built-in technologies, I think I’d definitely invest in something more robust. Eventually, I want to get a hold of Dragon NaturallySpeaking to give that a try for comparison.

For people who can use the keyboard along with their speech input, there are keyboard shortcuts to turn listening on and off:

  • Apple: press Fn twice (Fn Fn)
  • Windows: Ctrl + Windows

By far, this was the hardest thing for me to test with. I think that’s due to the AI relationship between me and my computer. It was nothing like quickly turning on another piece of software and diving right in. Instead, it required that the software understand me clearly, which didn’t happen often. Ultimately, it will take some time for me to get comfortable testing my web pages with speech recognition software. Until then, I’ll be leaning heavily on other testing methods as well as good code and design practices.

As a final note, based on the above statement, I think purchasing more robust speech recognition software, like Dragon, would be a harder sell to my employer when it comes to accessibility testing. It’s a hard enough sell for me to want to purchase a Home edition license for my own personal use and testing freelance projects.

Day 63: Practice with JAWS

Back to playing with assistive technology. I wanted to mess around with speech input software, but once I started the process I realized that will be a weekend project, due to the learning curve. So I settled on working with JAWS today to continue learning assistive technologies and the experiences they provide.

Note: JAWS is really robust and considered top-notch in the screen reader industry. By no means am I an expert at using JAWS. However, I need more practice with it, since I lean more heavily on NVDA for screen reader testing.

What I learned today

JAWS stands for “Job Access With Speech”.

The cursor on the screen blinks REALLY fast when JAWS has been activated.

Some of JAWS’ basic keyboard commands are very similar to NVDA’s (or vice versa), which was extremely helpful when experimenting with it. That made me happy when thinking about one of my blind friends who recently made the switch from JAWS to NVDA. It likely made her transition a whole lot easier! (Now I’ll have to ask her about it.)

I used Insert + F3 a lot to move more quickly through a page’s regions and interactive areas. I liked how many options their built-in navigation feature gave me. However, I accidentally discovered a browser difference with the Virtual HTML Features when I switched over to my Firefox window to add notes to this post (the colors look funny because my system was in High Contrast Mode at the time).

Firefox with Insert + F3:

JAWS Find dialog window.

IE with Insert + F3:

Virtual HTML Features dialog.

The built-in PDF reader in IE didn’t seem to register any regions in the Deque cheatsheet the way NVDA with Firefox did, so I couldn’t quickly navigate between tables within the browser.

I really liked that I could sort links by visited, unvisited, tab order, or alphabetically! Plus, I could either go to a link or activate it as soon as I found it in this list.

JAWS Links List dialog.

JAWS had a few more customization choices than NVDA:

JAWS Settings Center dialog.

My bad: I inadvertently found a few photos on a site I manage that need some alternative text because they are not decorative.


Day 62: Identifying A11y Issues for Voice Input Users

Speech input software is an assistive technology and strategy that people use when they have difficulty using a keyboard or mouse. This may include people with motor, visual, or cognitive disabilities. In the 21st century, it’s an excellent alternative for people in all walks of life.


What I learned today

Windows 10 has built-in speech recognition?? It sounds like a combination of Cortana and Speech Recognition could be a cheap alternative to Dragon, but I’d need to experiment a bit with both to compare.

Apple has a Dictation feature, so, somewhat like Windows, a combination of Siri and Dictation could be used. I’ve avoided setting up Dictation because of the privacy flag that pops up when it asks permission to connect to an Apple server and learn from your voice over the Internet. Maybe I’m just paranoid and they all actually work that way?

Dragon offers some ARIA support, but it appears to be limited, so it should be tested if you’re relying on aria-label, specific roles, etc.

Love this catchphrase from the Web accessibility perspectives video:

“Web accessibility: essential for some, useful for all.”

Challenges that people who use speech recognition software face on the web:

  • carousels that move without a pause button
  • invisible focus indicators
  • mismatched visual order and tab order
  • mismatched linked image text and alternative text
  • duplicate link text (e.g. Read More) that leads to different places
  • form controls without labels (see the sketch after this list)
  • hover-only menus (MouseGrid can struggle to access these)
  • small click targets
  • clickable items that don’t look clickable
  • too many links

Designers and developers should focus on WCAG’s Operable principle. In particular, the Navigable guideline’s success criteria apply here. If many of those success criteria are met with other users in mind, it will definitely benefit speech recognition users, too.

In the past, I haven’t personally been interested in software like Dragon, yet looking from an accessibility point of view, I’m ready to start testing with speech input technology to better understand how it works and how it affects people who rely on it when interacting with the web.

Day 61: Identifying A11y Issues for Users Who Magnify Their Screen


What I learned today

Windows has a built-in magnifier, as does Apple, but these often aren’t strong or robust enough to help everyone with low vision. Alternative magnification software includes:

For mobile, I knew Apple phones and tablets had zoom built in, but Android devices have built-in magnification, too.

Apple Watch has a zoom feature (YouTube)!

Trying to learn all things accessibility, I’m constantly having to rediscover keyboard shortcuts:

  • Windows Magnifier: Windows + +
  • Apple Zoom: Option + Cmd + 8

Never assume that two people with low vision are alike. Everyone with low vision has their own underlying reasons for why they struggle with that disability. The point is to add flexibility for their particular experience with low vision and the strategies they use to access content and services on the web.

Challenges people who enable magnification may encounter on the web:

  • text rendered as an image becomes blurry and pixelated when magnified
  • unclearly marked sections/landmarks make navigation slow when users only see a small portion of the screen and are trying to differentiate navigation from main content from the footer
  • headings that look too much like paragraph text
  • unclear link text
  • scrolling, flashing, or moving objects (carousels, I’m glaring at you again)
  • drawn-out content that doesn’t provide a quick intro or conclusion at the beginning
  • horizontal scrolling
  • page content referred to by its position (e.g. “to the right”)
  • meaning conveyed by color alone
  • forms with fields and labels that are not close together or positioned on one line together

WebAIM’s advice:

“The general rule when designing for low vision is to make everything configurable. If the text is real text, users can enlarge it, change its color, and change the background color. If the layout is in percentages, the screen can be widened or narrowed to meet the user’s needs. Configurability is the key.”
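
A minimal sketch of that advice (the selector and values are my own): real text plus relative units lets users enlarge text or narrow the window without breaking the layout.

/* Percentages and relative units flex with the user's settings. */
main {
  max-width: 90%;    /* the layout widens and narrows with the window */
  margin: 0 auto;
  font-size: 1em;    /* scales with the user's chosen text size */
  line-height: 1.5;
}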

WCAG supports people with low vision through its Perceivable principle. Fifteen success criteria to consider when designing to include low vision users who magnify their screen:

  • 1.1.1 Non-text content (A)
  • 1.3.1 Info and relationships (A)
  • 1.3.3 Sensory characteristics (A)
  • 1.3.4 Orientation (AA)
  • 1.4.1 Use of color (A)
  • 1.4.3 Contrast (minimum) (AA)
  • 1.4.4 Resize text (AA)
  • 1.4.5 Images of text (AA)
  • 1.4.6 Contrast (enhanced) (AAA)
  • 1.4.8 Visual presentation (AAA)
  • 1.4.9 Images of text (no exception) (AAA)
  • 1.4.10 Reflow (AA)
  • 1.4.11 Non-text contrast (AA)
  • 1.4.12 Text spacing (AA)
  • 1.4.13 Content on hover or focus (AA)

Day 59: Identifying A11y Issues for High Contrast Mode Users

Ok, ok… so High Contrast Mode (HCM) isn’t explicitly listed in the WAS Body of Knowledge under the current section I’m in, but it’s an area of testing that is significant to me. I’m interested in seeing how my content looks when it undergoes the transformation created by the system. And I wanted to take time to think about what others using it may experience, and the strategies they may have to use when something can’t be seen after that transformation.

Additionally, it’s such a cheap and easy way to test that I like to encourage other designers and developers to use it as well. It is not insignificant to the people who use your sites and might be using HCM or inverted colors.

One last thing I’d like to mention before sharing what I did and learned… I actually like using HCM on Windows. It has improved greatly over the past few years (I didn’t enjoy it when I first tried it). Oddly enough, a Dark Mode feature has been popping up more and more across applications and systems, so that has provided me with an alternative, too. I don’t use HCM on a regular basis, but I’ve used it for relief before in a bright office with bright windows and three bright monitors glaring around me. I experience light sensitivity, so it provides me with a solution to continue working at my computer without contributing to headaches.

What I learned today

Something I always have to look up when I want to test with HCM is the keyboard shortcut to activate it: Left Shift + Left Alt + Print Screen. The same key combination will exit HCM.

Not all of my Windows custom colors come back to life after using High Contrast Mode. Weird.

Invert colors on macOS, which is a completely different experience to me, can be activated with Control + Option + Command + 8.

HCM offers some personal customization of colors. I played with it a bit and settled on the High Contrast #1 preset, which offers a black background and yellow text. Then I tweaked hyperlink colors to stand out more in my browser (Firefox).

HCM benefits people with visual and cognitive disabilities, as well as people with environmental impairments. Some examples:

  • low vision
  • Irlen syndrome
  • bright environments like outdoors
  • low-light environments

Not surprisingly, WCAG comes into play here: SC 1.4.8 Visual Presentation. Yes, that’s inside the Perceivable principle!

The last point brought home the issue that we can never assume how someone else’s system is set up. Default HCM offers white text on a black background, but that doesn’t work for everyone, depending on their visual needs and preferences. The best we can do is follow some core principles to enable people to perceive our content:

  • Give focusable items some sort of visual indicator like a border or highlight (we’re doing it for our keyboard users anyway, right?)
  • Don’t use background images to deliver important content
  • Be considerate of foreground and background colors and how they can change drastically, dependent on the user’s system settings
  • Don’t rely on color alone to convey important information
  • Take advantage of the flexibility of SVGs, currentColor, and buttonText (see the sketch after this list)
  • Use high contrast icons, even without considering HCM
  • Add or remove backgrounds that affect HCM users
  • Use semantic markup to improve user experience
  • Always manually test with HCM yourself at the beginning of design and end of development
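
A hedged sketch of the currentColor point above (the icon and markup are my own). An inline SVG stroked with currentColor inherits whatever text color the user’s HCM theme imposes, so the icon stays visible:

<button>
  <!-- The plus icon follows the button's text color, including
       colors forced by High Contrast Mode. -->
  <svg width="16" height="16" viewBox="0 0 16 16" aria-hidden="true" focusable="false">
    <path d="M8 2v12M2 8h12" stroke="currentColor" stroke-width="2"/>
  </svg>
  Add item
</button>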

Firefox partially supports HCM styling, and Chrome doesn’t support it at all. Microsoft supports it, though, with:

@media (-ms-high-contrast: active) {}
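
A minimal usage sketch (the selector and rule are my own; windowText is a CSS system color keyword that follows the user’s chosen theme):

@media (-ms-high-contrast: active) {
  /* Use a system color so the border matches the user's HCM theme. */
  button {
    border: 2px solid windowText;
  }
}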

For the most part, I was pleasantly surprised that I had no trouble seeing all components on my screen throughout the Windows system, as well as elements on familiar web pages that I frequent. There were a few exceptions, but at least I knew when things were present, even if I couldn’t see them. Not great for someone new to those sites, though.

Working in CKEditor today, I discovered it has a neat trick for people using HCM: the editor icons were no longer icons; they were plain text. Kind of neat! Read further ahead to see more of my experience.

More on CKEditor

As I mentioned under “What I learned today”, my HCM encounter with icons transformed into plain text in CKEditor was a surprise:

CKEditor toolbar with all tool buttons using plain text as labels.

I had to turn off HCM just to remember what I was used to looking at:

CKEditor toolbar with buttons using icons as labels.

Naturally, that got me very curious. So I visited the CKEditor website and dug into their documentation. Indeed, they have their own support for HCM. Someone put some thought into it! The same transformation did not happen as I wrote this post in WordPress with its TinyMCE editor.

Day 56: Practice with Narrator

Back to exploring more assistive tech, specifically screen readers. Today I experimented for the first time with Narrator, the built-in screen reader for Windows. I was a bit apprehensive at first, since it has not been on my priority list to learn, knowing that a mere 0.3% of desktop screen reader users use Narrator, according to WebAIM’s latest Screen Reader Use Survey. However, it’s free and built in for Windows users (and it’s mentioned in the WAS Body of Knowledge study material), so I’m giving it a chance.

What I learned today

  • Turn on Narrator with shortcut keys: Windows (logo) key + Ctrl + Enter.
  • Narrator was finicky with Firefox, my preferred browser, but Edge is recommended as the best web browser when using this screen reader.
  • Narrator has a Developer View (Caps Lock + Shift + F12), which masks the screen, highlighting only the objects and text exposed to Narrator.
  • By default, Narrator presents an outline around the parts of the webpage that it is reading aloud. I found this handy to keep up with where it was at.
  • It has touch gestures. I suppose that makes sense, since not all Windows computers are desktop computers.
  • Accessible documents are important. (I knew this already.) I was able to easily navigate between tables on the Deque PDF cheatsheet with the T key because they made it with accessibility in mind.

There is still so much to learn! Jumping between screen reader programs leaves my head spinning with all the shortcut keys I’d need to know. I’ll come back to this screen reader at some point, because one hour of use is not enough to get fully comfortable with it. I also need to expand my cheatsheet to include more commands and tasks. Currently, it’s just a quick guide to the most frequent tasks I’ve needed.


Day 54: Users with Motoric Disabilities

More on people with various disabilities. Today’s exploration led me to learn more about people with different motor disabilities. This group may include people with cerebral palsy, multiple sclerosis, quadriplegia, and arthritis.

What I learned today

When considering people with motor disabilities, web designers and developers should hold fast to WCAG’s Operable principle. Important concepts include creating a usable interface that:

  • is keyboard navigable (this also benefits voice activated software)
  • tolerates user error
  • provides alternative navigation methods to skip over lists of links, repetitive sections, and lengthy text
  • sets important stuff above the fold
  • offers autocomplete, autofill, or autosave
  • enables extended time limits
  • manages off-screen items appropriately (display:none, visibility:hidden when out of view)
  • provides clear focus outlines
  • provides large target (clickable) areas (buttons, links, controls)

There are one-handed keyboards for people who have the use of only one hand. Other assistive technologies that can be used by those with more severe paralysis include head wands, mouth sticks, voice recognition software, switch access, and eye-tracking.

Day 51: Users with Low Vision

Continuing on through the WAS Body of Knowledge, I’m currently working through concepts that involve building websites that accommodate strategies used by people with disabilities. Today I focused on those with low vision. I’m personally most familiar with this group, and yet the strategies that people with low vision use to access web content can vary greatly, so there is still room for me to learn here.


What I learned today

Several low vision users use screen readers, but oftentimes they make the most of the vision they do have by:

  • Using text enlargement and zoom in the browser
  • Changing colors, contrast, or fonts in the browser or operating system
  • Using magnifying tools
  • Using keyboard commands in conjunction with mouse to speed up interaction

ZoomText Magnifier/Reader is a Freedom Scientific product (the same company that produces JAWS). It appears to be a very robust program, offering enhancements that increase the visibility of content, the cursor, and focus. Additionally, it has a screen reader function and a toolbar that lets the user search and find by text, headings, lists, tables, etc. (unified finder). ZoomText and JAWS can work together.

“VoiceOver can describe images to you, such as telling you if a photo features a tree, a dog, or four smiling faces. It can also read aloud text in an image — whether it’s a snapshot of a receipt or a magazine article — even if it hasn’t been annotated.” WOW. I tried this on my iPhone and verified that it could describe a picture of my son outside in the snow. My mind was BLOWN. This technology makes me very happy for one of my blind friends!

iOS magnification can jump from 100% to 1500%. Android phones have magnification, too.

High contrast text, color inversion, and color correction are available on Android 5.0+; however, they are still considered experimental features. That’s interesting, considering these are solid accessibility options on iPhones.