Day 89: Writing a Real Accessibility Evaluation

Today I had the joy of practicing what I’ve been learning over the last 88 days. I’ve been itching to work through a formal evaluation, and a real opportunity came up.

Things I accomplished

  • Presented an accessibility issue to an accessibility working group that I chair.
  • Wrote an official evaluation for a separate working group to present the aforementioned findings in a formal way.

What I reviewed today

First, I gathered the details that I’d written down. Next, I walked through the WCAG-EM Report Tool to reaffirm my findings, as well as add to them. Lastly, I composed a formal evaluation of the features that were not in conformance with WCAG 2.1, Level AA.

The base outline of my evaluation:

  1. Overview
  2. WCAG-EM evaluation
    1. Scope
    2. Failures
  3. QA testing
    1. Automated testing
    2. Manual testing
    3. Personas
  4. Remediation recommendations

What I learned from it

The process of evaluation can take quite a bit of time, especially for someone who is new. I had to re-read many WCAG success criteria, limit my scope, and think deeply about which techniques were sufficient, advisory, or failures. Additionally, I encountered some bad practices and some not-so-optimal coding solutions, but refrained from delving into those since they were not part of the aim of conformance. It was a true learning experience, and one I hope to refine over the next few years!

Day 82: Testing with Users with Disabilities

Today’s study session led me back to usability testing. It seems critical to add to our testing toolbox, alongside automated and manual testing tools, to catch usability and accessibility issues that escape conformance checks.

Personally, this is one of the areas I struggle with implementing. I love reading about usability testing and the case studies that people document, but I’ve not yet taken the opportunity to try doing this myself, usually because it has its own added cost, as well as the awkwardness of setting up testing with a specific group of people. Maybe this will be my motivator to make some connections and start a plan to make it happen this year.

Things I accomplished

Read:

What I learned today

The guidelines are not all-inclusive. Some good accessibility techniques may not be in WCAG because:

  • It is difficult to objectively verify compliance with the technique.
  • The writers of the guidelines did not recognize the need for the technique when writing the guidelines.
  • The technique was not necessary (or at least not anticipated) at the time the guidelines were written, because the technologies or circumstances that require the technique are newer than the guidelines.

Before bringing in users for testing, do some preliminary checks and fix known issues in order to better discover underlying accessibility and usability challenges that were not detectable by software or manual checks.

Including users in testing doesn’t have to be a full-blown usability study. Informal evaluations and brief interactions with feedback can be very helpful. Additionally, informal evaluations can happen throughout the product’s lifecycle, whereas formal usability studies usually occur near the end of development. Bonus: informal interactions can help us all see the person more clearly, rather than as a case study.

Never assume that feedback from one person with a disability speaks for all people with disabilities. A small-scale evaluation (only a few people within a study) is not enough to draw conclusions with statistical significance, even though it yields valuable insight. Try to include a variety of disabilities: auditory, cognitive, neurological, physical, speech, and visual, with different characteristics. If possible, include older people as well.

Further reading

 

Day 81: Manual vs. Automated A11y Testing Tools

Today I went into my study time with the intent to list out the pros and cons of automated versus manual accessibility testing. Instead, I walked away with a comparison of what each has to offer and an understanding that both are valuable when used together during website and web app development.

Things I accomplished

Submitted my request to take the Web Accessibility Specialist certification exam in early April via private proctor.

Read:

Created a comparison table to jot down ideas about manual and automated testing (see under What I learned today).

What I learned today

Manual Testing                                  | Automated Testing
------------------------------------------------|-------------------------------------------------
Slower process                                  | Faster process
Mostly accurate                                 | Sometimes accurate
Easier to miss a link                           | Guaranteed check of all links
Identifies proper state of elements             | Automated user input can miss state
Page by page                                    | Site-wide
Assurance of conformance                        | Misleading in assurance of conformance
Guidance for alternative solutions              | Yes/no (boolean) checks and solutions
Human and software                              | Software
Context                                         | Patterns
Finds actual problems                           | Lists potential problems
Appropriate HTML semantics                      | HTML validation
Accurate alt text                               | Existence of alt attribute
Heading hierarchy                               | Headings exist
Follows intention of usability                  | Follows WCAG success criteria
Text is/isn't readable                          | Programmatic color contrast
Exploratory                                     | Automated
Part of the testing process                     | Part of the testing process
Appropriate use of ARIA                         | Presence and validity of ARIA
In real life                                    | Hypothetical
Identifies granular challenges of usability     | Quickly identifies low-hanging fruit and repeated offenders

In conclusion

Deciding on testing methods and tools shouldn’t be an either-or mandate. Each has its strengths and weaknesses, and using both methods should be part of every testing process. Why not strengthen your product’s usability by incorporating tools from each methodology into your process?

Day 80: Manual A11y Testing Tools

Yesterday I browsed through automated accessibility testing tools. Today, per their mention in the WAS Body of Knowledge, I discovered some manual accessibility testing tools that offer more insight into problems that can’t be caught in automated reports. These tools go beyond the easy checks, like color contrast, headings, and keyboard access, that I’m used to checking for.

Tomorrow I hope to dig in a bit deeper to compare the difference between automated and manual testing, along with the drawbacks of each.

Things I accomplished

What I learned today

Manual testing tools, much like automated testing tools, offer reports and automated tests for all audiences within the development process to get a start on addressing accessibility issues. The advantage that manual tooling provides is that it offers additional guidance and education to fix problems that cannot be systematically evaluated through automated checkpoints. However, no tooling replaces human judgement and end-user testing.

Manual testing tools can include:

  • guided manual testing and reports, based on heuristics (WorldSpace Assure)
  • browser inspector tools and add-ons (accessibility audit in Chrome DevTools)
  • accessibility API viewers (Accessibility Viewer views the a11y tree)
  • simulators (No Coffee visual disabilities simulation)
  • single (heading levels) and multi-purpose (many checkpoints) accessibility tools

Another observation about manual testing tools: they may take more time to work through results, but many more of them are free to use compared to automated full-site testing tools.

Though I found that many manual testing tools seem to fall between the development and testing phases, there are some system-wide tools that help earlier in the life cycle. Color Oracle is one such application that can assist designers before any code is written, taking colorblindness into consideration at the beginning of the site’s life cycle.

An Aside

Ran across an accessibility basics article by Microsoft, and loved this catchphrase:

“Accessibility is a built-in, not a bolt on.”

Day 79: Automated A11y Testing Tools

Moving on to another section in the WAS Body of Knowledge, quickly approaching the end. I’m postponing going over the “Test for End-user Impact” section in order to work through the “accessibility testing tools” section. The summary says it all for me:

“No accessibility software tool can find all the accessibility issues on a web site, but software tools can expedite the process of finding accessibility issues, and increase the overall accuracy when supplemented by a skilled manual evaluation of the same content.”

Or, as the Web Accessibility Initiative (WAI) sums it up:

“We cannot check all accessibility aspects automatically. Human judgement is required. Sometimes evaluation tools can produce false or misleading results. Web accessibility evaluation tools can not determine accessibility, they can only assist in doing so.”

Things I accomplished

What I learned today

I hadn’t considered this before, but not all tools are meant to target one audience (developers). Each tool is created with a specific audience in mind, whether it be:

  • designers,
  • developers,
  • non-technical content authors,
  • quality assurance testers, and
  • end-users

There are SO many options. How intimidating for anyone trying to decide what software, plug-in, or consultant to use!

Automated testing involves different considerations based on audience, need, conformance standard and level, site complexity, and accessibility experience. Various types of automated testing include:

  • site-wide scanning and reporting (SortSite, Tenon.io, AMP)
  • server-based page analysis from one page to entire site (Cynthia Says, SiteImprove)
  • browser-based developer/QA plug-ins that evaluate one page at a time (WAVE, AInspector)
  • unit testing during development (aXe API)
  • integration testing before deployment (aXe API)

It strikes me that using a combination of tools with differing purposes could help speed up the process and ensure accuracy even more. By no means would they replace manual checks and end-user testing, but it’s an incentive not to pick just one tool to do a job meant for several.

 

Day 64: Experimenting with Dictation & Speech Recognition

While working through my study session a few days ago about identifying a11y issues for people who use voice input to navigate the web, I ran across an interesting tweet thread started by Rob Dodson:

So funny that this stuck out on my feed last night as I was finishing up my blog post! Admittedly, it sparked my curiosity about how ARIA affects voice dictation users, and spurred me on further to start testing with the different platforms that are available to people who need full voice navigation.

Things I accomplished

  • Experimented with Apple Dictation and Siri combination (with brief use of VoiceOver)
  • Experimented with Windows Speech Recognition in Cortana’s company
  • Attempted to write some of this blog post with a combination of Apple Dictation and Windows Speech Recognition.

What I learned today

Disclaimer: I am not a voice input software user. This is VERY new to me, so lean very lightly on my “experience” and what I’m learning.

Learning curve and first-use exposure aside, Apple’s Dictation feature didn’t seem to have enough reach or intuition to do the multitasking I wanted to do. Additionally, I found that I had to keep reactivating it (“Computer…”) to give commands. There was no continuity. Apparently, I’m not the only one disappointed with the lack of robustness Dictation + Siri has to offer. Read another person’s feelings in Nuance Has Abandoned Mac Speech Recognition. Will Apple Fill the Void?

Here is Apple’s dictation commands dialog that I had off to the side of my screen as I worked with it enabled:

Apple Dictation Commands dialog, starting with Navigation.

The number system to access links appears to be universal across dictation software. I found it in both Apple Dictation and Windows Speech Recognition, and I know Dragon has it, too.

Windows Speech Recognition was just as awkward for me. However, I felt it was built more for navigating my computer, not solely for dictating documents. Microsoft has a handy Speech Recognition cheatsheet.

Here is the Windows Speech Recognition dock, which reminds me of what I’ve seen online with Dragon software:

Speech Recognition dock "Listening".

I found myself struggling to not use my keyboard or mouse. If I had to rely on either of these OS built-in technologies, I think I’d definitely invest in something more robust. Eventually, I want to get a hold of Dragon NaturallySpeaking to give that a try for comparison.

For people who can use the keyboard along with their speech input, there are keyboard shortcuts to turn listening on and off:

  • Apple: Fn Fn
  • Windows: Ctrl + Windows

By far, this was the hardest thing for me to test with. I think that’s due to the AI-mediated relationship between me and my computer. It was nothing like quickly turning on another piece of software and diving right in. Instead, it required that the software understand me clearly, which didn’t happen often. Ultimately, it will take some time for me to get comfortable with testing my web pages with speech recognition software. Until then, I’ll be leaning heavily on other testing methods as well as good code and design practices.

As a final note, based on the above statement, I think purchasing more robust speech recognition software, like Dragon, would be a harder sell to my employer when it comes to accessibility testing. It’s a hard enough sell for me to want to purchase a Home edition license for my own personal use and testing freelance projects.

Day 63: Practice with JAWS

Back to playing with assistive technology. I wanted to mess around with speech input software, but I started the process and realized that would be a weekend project, due to the learning curve. So I settled on working in JAWS today to continue learning assistive technologies and the experience they provide.

Note: JAWS is really robust and considered top-notch in the screen reader industry. By no means am I an expert at using JAWS; I need more practice with it, since I lean more heavily on NVDA for screen reader testing.

Things I accomplished

What I learned today

JAWS stands for “Job Access With Speech”.

The cursor on the screen blinks REALLY fast when JAWS has been activated.

Some of JAWS’s basic keyboard commands are very similar to NVDA’s (or vice versa). That was extremely helpful when experimenting with it, and it made me happy when thinking about one of my blind friends who recently made the switch from JAWS to NVDA. It likely made her transition a whole lot easier! (Now I’ll have to ask her about it.)

I used Insert + F3 a lot to move more quickly through a page’s regions and interactive areas. I liked how many options I had in the built-in navigation feature. However, I did accidentally discover a browser difference with the Virtual HTML Features dialog when I switched over to my Firefox window to add notes to this post (the colors are funny because my system was in High Contrast Mode at the time).

Firefox with Insert + F3:

JAWS Find dialog window.

IE with Insert + F3:

Virtual HTML Features dialog.

The built-in PDF reader in IE didn’t seem to register any regions with the Deque cheatsheet, like NVDA with Firefox did, so I couldn’t quickly navigate between tables within the browser.

I really liked how I could sort links by visited, unvisited, tab order, or alphabetically! Plus, I could either move to a link or activate it as soon as I found it in this list.

JAWS Links List dialog.

JAWS had a few more customization choices than NVDA:

JAWS Settings Center dialog.

My bad: I inadvertently found a few photos on a site I manage that need alternative text because they are not decorative.

 

Day 59: Identifying A11y Issues for High Contrast Mode Users

Ok, ok… so High Contrast Mode (HCM) isn’t explicitly listed in the WAS Body of Knowledge under the current section I’m in, but it’s an area of testing that is significant to me. I’m interested in seeing how my content looks when it undergoes the transformation created by the system. And I wanted to take time to think about what others using it may experience and the strategies they may have to use when something can’t be seen after that transformation.

Additionally, it’s such a cheap and easy way to test that I like to encourage other designers and developers to use it as well. It matters to the people who use your sites and might be using HCM or inverted colors.

One last thing I’d like to mention before sharing what I did and learned… I actually like using HCM on Windows. It has improved greatly over the past few years (I didn’t enjoy it when I first tried it). Oddly enough, a Dark Mode feature has been popping up more and more across applications and systems, so that has provided me with an alternative, too. I don’t use HCM on a regular basis, but I’ve used it for relief before in a bright office with bright windows and three bright monitors glaring around me. I experience light sensitivity, so it provides me with a solution to continue working at my computer without contributing to headaches.

Things I accomplished today

What I learned today

Something I always have to look up when I want to test with HCM is the keyboard shortcut to activate it: Left Shift + Left Alt + Print Screen. The same key combination will exit HCM.

Not all of my Windows custom colors come back to life after using High Contrast Mode. Weird.

Invert Colors on macOS, which is a completely different experience to me, can be activated with Control + Option + Command + 8.

HCM offers some personal customization of colors. I played with it some and settled on the High Contrast #1 preset, which offers a black background and yellow text. Then I tweaked hyperlink colors to stand out more in my browser (Firefox).

HCM benefits people with visual and cognitive disabilities, as well as people with environmental impairments. Some examples:

  • low vision
  • Irlen syndrome
  • bright environments like outdoors
  • low-light environments

Not surprisingly, WCAG comes into play here: SC 1.4.8 Visual Presentation. Yes, that’s inside the Perceivable principle!
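To remind myself what that success criterion asks for in practice, here is a rough CSS sketch of my own (the class name is hypothetical, and it only covers the provisions that plain CSS can express, like text width, line spacing, and justification):

/* Hypothetical example: a few SC 1.4.8 provisions expressed as CSS */
.long-form-text {
  max-width: 80ch;   /* blocks of text no wider than 80 characters */
  line-height: 1.5;  /* line spacing of at least space-and-a-half */
  text-align: left;  /* text is not justified to both margins */
  font-size: 1rem;   /* relative units help text resize up to 200% without breaking the layout */
}

The remaining provisions, like letting users select their own foreground and background colors, take more than a static stylesheet, which is exactly where HCM comes in.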

All of this brought home the issue that we can never assume how someone else’s system is set up. Default HCM offers white text on a black background, but that doesn’t work for everyone, depending on their visual needs and preferences. The best we can do is follow some core principles to enable people to perceive our content:

  • Give focusable items some sort of visual indicator like a border or highlight (we’re doing it for our keyboard users anyway, right?)
  • Don’t use background images to deliver important content
  • Be considerate of foreground and background colors and how they can change drastically, dependent on the user’s system settings
  • Don’t rely on color alone to convey important information
  • Take advantage of the flexibility of SVGs, currentColor, and buttonText (see the sketch after this list)
  • Use high contrast icons, even without considering HCM
  • Add or remove backgrounds that affect HCM users
  • Use semantic markup to improve user experience
  • Always manually test with HCM yourself at the beginning of design and end of development
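To make the SVG and currentColor idea concrete, here is a minimal sketch with a made-up class name. The icon inherits whatever text color the user’s High Contrast theme applies, instead of vanishing with a hard-coded fill:

/* Hypothetical icon button whose SVG icon follows the user's theme colors */
.icon-button {
  color: inherit;      /* take on the surrounding text color, even when HCM overrides it */
}
.icon-button svg {
  fill: currentColor;  /* the icon tracks the text color instead of a fixed hex value */
}
.icon-button:focus {
  outline: 2px solid;  /* most browsers fall back to the text color, so the indicator stays visible */
}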

Firefox partially supports HCM in code, and Chrome doesn’t support it at all. Microsoft supports it, though, with:

@media (-ms-high-contrast: active) {}
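Just as a sketch of my own (the selectors are made up), the rules inside that block can restore cues that HCM strips away, leaning on CSS system color keywords like the buttonText value mentioned earlier:

@media (-ms-high-contrast: active) {
  /* Hypothetical selectors, purely to illustrate the idea */
  .promo-banner {
    background-image: none;        /* don't rely on background images to carry meaning */
    border: 2px solid windowText;  /* draw a real border in the user's theme color */
  }
  .site-nav a:focus {
    outline: 3px solid highlight;  /* keep the focus indicator visible against the HCM palette */
  }
}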

For the most part, I was pleasantly surprised that I had no trouble seeing all the components on my screen throughout the Windows system, as well as elements on familiar web pages that I frequent. There were a few exceptions, but at least I knew when things were present, even if I couldn’t see them. Not great for someone new to those sites, though.

Working in a CKEditor today, I discovered it had a neat trick for people using HCM: the editor icons were no longer icons; they were plain text. Kind of neat! Read on to see more of my experience.

More on CKEditor

As I mentioned under “What I learned today”, my HCM encounter with CKEditor transforming icons into plain text was a surprise:

CKEditor toolbar with all tool buttons using plain text as labels.

I had to turn off HCM just to remember what I was used to looking at:

CKEditor toolbar with buttons using icons as labels.

Naturally, that got me very curious, so I visited the CKEditor website and dug into their documentation. Indeed, they have their own support for HCM. Someone put some thought into it! The same transformation did not happen as I wrote this post in WordPress with its TinyMCE editor.

Day 58: Identifying A11y Issues for Keyboard Users

Through studying WCAG (Guideline 2.1) and other web accessibility documentation and articles, I know that keyboard navigability and interoperability are important for a wide variety of users. Some important ideas to focus on when creating websites with keyboard accessibility in mind:

  • Actionable elements (links, buttons, controls, etc.) must be focusable via the keyboard (WCAG SC 2.1.1);
  • All focusable elements need a visible border or highlight as a focus indicator (WCAG SC 2.4.7; see the CSS sketch after this list);
  • Logical (expected) tabbing order is set up appropriately (WCAG SC 2.4.3);
  • No keyboard traps, like poorly developed modals, have been created (WCAG SC 2.1.2).
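Since a missing focus indicator is the failure I run into most often, here is a minimal CSS sketch of one way to satisfy SC 2.4.7 (the exact styling is only an example):

/* Hypothetical focus styles so keyboard users can always see where they are */
a:focus,
button:focus,
input:focus,
select:focus,
textarea:focus {
  outline: 3px solid;   /* a solid outline in the current text color */
  outline-offset: 2px;  /* a little breathing room around the element */
}

The classic pitfall is the inverse: a global outline: none reset wipes out the indicator for everyone, so it should never ship without an equally visible replacement.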

The best way to test for keyboard accessibility? Test with a keyboard! It’s one of the easiest and cheapest ways to find out if you’re blocking someone from accessing your content. Critical (yet basic) keys to use:

  • Tab
  • Enter
  • Spacebar
  • Arrow keys
  • Esc

If any of those keys fail you when it comes to expected behavior for controls and navigation, it’s time to start digging into the code and figuring out what’s preventing that expected and conventional behavior.

That being said, I’ve started looking at section two of the WAS Body of Knowledge to fill in any gaps I have about identifying accessibility issues, starting with interoperability and compatibility issues.

Things I accomplished

What I learned today

Today I didn’t learn much on top of what I already knew, since I’ve had a lot of practice checking for keyboard accessibility due to its simplicity. However, I leave no stone unturned when it comes to studying for this exam and making sure I’m not overlooking gaps in my web accessibility knowledge. Plus, I’m always eager to take the opportunity to advocate for keyboard testing as a reminder to designers and developers that not everyone uses a mouse and that even well-written code can produce a different experience than initially expected.

One thing I did learn:

  • “A common problem for drop-downs used for navigation is that as soon as you arrow down, it automatically selects the first item in the list and goes to a new page.” (Keyboard Access and Visual Focus by W3C) I’ll have to watch out for this one when I audit other sites, since I have not created a drop-down with that type of behavior myself.

Day 34: TalkBack on Android

A diversion from my quality assurance research this week, out of the necessity of testing with an Android screen reader at work. Time spent today: 2 hours.

Things I accomplished

What I learned today

  • TalkBack wasn’t too different from my experience with VoiceOver gestures. Some minor gesture differences.
  • The equivalent functionality to VoiceOver’s rotor is the Local Context Menu.
  • TalkBack quick access can be set to a triple-press of the Home button on my S5 to turn it on.
  • TalkBack keyboard events are not the same as touch events. It can be hard to develop for all TalkBack users (some keyboard users, some touch users).
  • 29.5% of respondents to WebAIM’s screen reader survey said they use TalkBack.
  • A two-finger or three-finger swipe navigates me through my multiple screens.
  • The “explore by touch” feature reads focusable items as I drag my finger around the screen.
  • Entering my PIN was easier with TalkBack than it was with VoiceOver.

Day 24: Better VoiceOver Practice

Today I came back to practice using VoiceOver (VO) a bit more, since I was struggling with it yesterday. More practice definitely gave me more confidence. It would take a week of consistent use for me to use VO more naturally with my laptop. That’s an aspiration for the near future.

Things I accomplished

  • Walked through all 22 steps of the built-in VoiceOver Quick Start tutorial.
  • Read Chapter 1, 2, and 6 of Apple’s VoiceOver Getting Started Guide.
  • Added keyboard shortcuts to my study spreadsheet.

What I learned today

  • VO has a Trackpad Commander option. This meant that I could use some of the same gestures on my MacBook Pro (MBP) trackpad that I use on my iPhone! This was an important discovery for me, offering cross-device ease of use.
  • Control + Option + Spacebar selects my choice for interactive components like checkboxes, radio buttons, buttons, etc.
  • I finally got the hang of stepping in and out of different components and windows by using Control + Option + Shift + up/down arrows. For some reason, my brain struggled with this yesterday.
  • Control + Option + D gets me quickly into the dock of my MBP.
  • Control + Option + M goes directly to my MBP menu.
  • Control + Option + K opens keyboard help. When this is open, it explains what a key does when pressed while holding down the Control + Option keys.
  • Control + Option + H + H opens up a Command help dialog, which lists all the different keyboard shortcuts for specific commands and tasks.
  • Web Spots is a generated list of areas of the current webpage based on VoiceOver’s interpretation of the page’s visual design.
  • Control + Option + ; (semi-colon key) locks the VO modifier keys so you don’t have to keep holding them for shortcut commands. This was a big deal to learn! It seemed ridiculous to keep holding down 2-4 keys at a time while pressing another key.
  • Control + Option + Shift + I creates a verbal overview of the page, including how many headers, links, landmarks, etc.

Day 23: VoiceOver for macOS

Needing to take a break from reading through so much documentation, I decided to spend some time with some assistive technology. Specifically, I practiced navigating with VoiceOver (VO) on my MacBook Pro. It turns out that it was more of a challenge than I anticipated!

Things I accomplished

What I learned today

  • I surprised myself by feeling more out of my depth on my MBP than I did on my iPhone when using VoiceOver. I’ve become fairly familiar with NVDA on Windows, so I really felt like I was having to relearn navigating with a screen reader.
  • Control + Option + U opens the rotor.
  • Sometimes using VO felt complex when having to hold down 4 keys to “quickly” navigate a webpage.

VoiceOver on macOS resources