Day 83: Prioritizing Remediation of A11y Issues

During today’s study session, I walked away with a lot of new-to-me information and useful steps to apply to my current work.

Things I accomplished

What I learned today

When starting an accessibility remediation project, begin with a site’s core functionalities. Determine each issue’s origin (markup, style, or functionality), then prioritize accessibility issues by severity, considering:

  • impact on users: does the problem have a user workaround or are they completely inhibited (blocked) from using a core functionality?
  • legal risk: related to user impact; is it a legal risk (based on functionality block and type of organization) or just a usability issue? take note of perceivability and repeat offenders
  • cost benefit: is the ROI greater than the time invested to remediate or the cost of a lawsuit that may occur? (see the sketch after this list)
    e.g. ROI = ((Risk Amount – Investment) / Investment) * 100
  • level of effort to remediate (impact on business): how many changes (and where) have to be made?
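
To make the cost-benefit factor concrete, here is a minimal sketch of that ROI formula in TypeScript; the dollar figures are hypothetical placeholders, not numbers from the course.

```ts
// Sketch of ROI = ((Risk Amount - Investment) / Investment) * 100.
// The example amounts below are made up for illustration.
function remediationRoi(riskAmount: number, investment: number): number {
  return ((riskAmount - investment) / investment) * 100;
}

// e.g. a $50,000 legal/usability risk remediated with $10,000 of effort:
console.log(remediationRoi(50_000, 10_000)); // 400 (% return on the remediation work)
```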

WCAG conformance levels and success criteria are not the way to determine priority of remediation.

As mentioned in my notes about manual versus automated testing tools, it’s always best to target low-hanging fruit to begin quickly resolving issues.

When receiving an audit to proceed to remediation, people want to know:

  • where the problems are
  • what the problems are
  • how to fix them
  • not the specific technical guidelines and success criteria

Remediation teaches a hard lesson: if things are made accessible from the start, less time and money is wasted.

Time is money. You may save time by taking down inaccessible materials, but that time is simply added back elsewhere (technical debt shifted) to help desk lines or other resources.

I really liked Michigan State University’s accessibility severity scale:

  1. Level 4, Blocker: Prevents access to core processes or many secondary processes; causes harm or significant discomfort.
  2. Level 3, Critical: Prevents access to some secondary processes; makes it difficult to access core processes or many secondary processes.
  3. Level 2, Major: Makes it inconvenient to access core processes or many secondary processes.
  4. Level 1, Minor: Makes it inconvenient to access isolated processes.
  5. Level 0, Lesser: Usability observation.
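
To keep that scale handy in an audit spreadsheet or issue tracker, it could be encoded as data, as in the sketch below; the TypeScript shape and property names are my own, and only the levels and descriptions come from MSU’s scale.

```ts
// MSU's severity scale encoded as a lookup table (the type shape is my own).
type Severity = { level: number; label: string; description: string };

const msuSeverityScale: Severity[] = [
  { level: 4, label: "Blocker",  description: "Prevents access to core or many secondary processes; causes harm or significant discomfort." },
  { level: 3, label: "Critical", description: "Prevents access to some secondary processes; makes core or many secondary processes difficult." },
  { level: 2, label: "Major",    description: "Makes it inconvenient to access core or many secondary processes." },
  { level: 1, label: "Minor",    description: "Makes it inconvenient to access isolated processes." },
  { level: 0, label: "Lesser",   description: "Usability observation." },
];
```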

Remediation procedure levels by Karl Groves:

  • simple prioritization: time versus impact (user-centric)
  • advanced prioritization: scoring business and user impact, broken down by user type (see the sketch after this list)
    (Impact + Repair Speed + Location + Secondary Benefits) * Volume = Priority
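
Here is a minimal sketch of that advanced scoring formula; how each factor is scored (e.g. on a 1–5 scale) and the example values are my own assumptions, not part of Karl Groves’s method.

```ts
// Sketch of (Impact + Repair Speed + Location + Secondary Benefits) * Volume = Priority.
// The 1-5 scoring ranges and example values below are assumptions for illustration.
interface IssueScore {
  impact: number;            // severity of user/business impact
  repairSpeed: number;       // how quickly the fix can be made
  location: number;          // prominence of where the issue appears
  secondaryBenefits: number; // side benefits of fixing it (usability, SEO, ...)
  volume: number;            // how many pages or users the issue affects
}

function priority(score: IssueScore): number {
  const { impact, repairSpeed, location, secondaryBenefits, volume } = score;
  return (impact + repairSpeed + location + secondaryBenefits) * volume;
}

// e.g. a high-impact, quick-to-fix issue in a template used site-wide:
console.log(priority({ impact: 5, repairSpeed: 4, location: 5, secondaryBenefits: 3, volume: 5 })); // 85
```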

Best quote from today’s Deque course

Accessibility does not happen by accident. It has to be purposefully planned, built, and tested for accessibility.

Day 81: Manual vs. Automated A11y Testing Tools

Today I went into my study time with the intent to list out the pros and cons of automated versus manual accessibility testing. Instead, I walked away with a comparison of what each has to offer and an understanding that both are valuable when used cooperatively during website and web app development.

Things I accomplished

Submitted my request to take the Web Accessibility Specialist certification exam in early April via private proctor.

Read:

Created a comparison table to jot down ideas about manual and automated testing (see under What I learned today).

What I learned today

Manual Testing | Automated Testing
--- | ---
Slower process | Faster process
Mostly accurate | Sometimes accurate
Easier to miss a link | Guaranteed check of all links
Identifies proper state of elements | Automated user input can miss state
Page by page | Site-wide
Assurance of conformance | Misleading in assurance of conformance
Guidance for alternative solutions | Yes/No (boolean) checks and solutions
Human and software | Software
Context | Patterns
Finds actual problems | Lists potential problems
Appropriate HTML semantics | HTML validation
Accurate alt text | Existence of alt attribute
Heading hierarchy | Headings exist
Follows intention of usability | Follows WCAG success criteria
Text is/isn’t readable | Programmatic color contrast
Exploratory | Automated
Part of the testing process | Part of the testing process
Appropriate use of ARIA | Presence and validity of ARIA
In real life | Hypothetical
Identifies granular challenges of usability | Quickly identifies low-hanging fruit and repeated offenders

In conclusion

Deciding on testing methods and tools shouldn’t be an either-or mandate. Each has its strengths and weaknesses. Using both methods should be a part of every testing process. Why not strengthen your product’s usability by incorporating tools from each methodology into your process?

Day 80: Manual A11y Testing Tools

Yesterday I browsed through automated accessibility testing tools. Today, per their mention in the WAS Body of Knowledge, I discovered some manual accessibility testing tools that offer more insight into problems that can’t be caught in automated reports. These tools go beyond the easy checks, like color contrast, headings, and keyboard access, that I’m used to checking for.

Tomorrow I hope to dig in a bit deeper to compare the difference between automated and manual testing, along with the drawbacks of each.

Things I accomplished

What I learned today

Manual testing tools, much like automated testing tools, offer reports and automated tests for all audiences within the development process to get a start on addressing accessibility issues. The advantage that manual tooling provides is that it offers additional guidance and education to fix problems that cannot be systematically evaluated through automated checkpoints. However, no tooling replaces human judgement and end-user testing.

Manual testing tools can include:

  • guided manual testing and reports, based on heuristics (WorldSpace Assure)
  • browser inspector tools and add-ons (accessibility audit in Chrome DevTools)
  • accessibility API viewers (Accessibility Viewer views the a11y tree)
  • simulators (No Coffee visual disabilities simulation)
  • single (heading levels) and multi-purpose (many checkpoints) accessibility tools

Another observation about manual testing tools: they may take more time to work through results, but many more of them are free to use compared to automated full-website testing tools.

Though I found that many manual testing tools seem to fall between the development and testing phases, there are some system-wide tools that help earlier in the life cycle. Color Oracle is one such application that can assist designers before any code is written, taking colorblindness into consideration at the beginning of the site’s life cycle.

An Aside

Ran across an accessibility basics article by Microsoft, and loved this catchphrase:

“Accessibility is a built-in, not a bolt on.”

Day 79: Automated A11y Testing Tools

Moving on to another section in the WAS Body of Knowledge, quickly approaching the end. I’m postponing going over the “Test for End-user Impact” section in order to work through the “accessibility testing tools” section. The summary says it all for me:

“No accessibility software tool can find all the accessibility issues on a web site, but software tools can expedite the process of finding accessibility issues, and increase the overall accuracy when supplemented by a skilled manual evaluation of the same content.”

Or, as the Web Accessibility Initiative (WAI) sums it up:

“We cannot check all accessibility aspects automatically. Human judgement is required. Sometimes evaluation tools can produce false or misleading results. Web accessibility evaluation tools can not determine accessibility, they can only assist in doing so.”

Things I accomplished

What I learned today

I hadn’t considered this before, but not all tools are meant to target one audience (developers). Each tool is created with a specific audience in mind, whether it be:

  • designers,
  • developers,
  • non-technical content authors,
  • quality assurance testers, and
  • end-users

There are SO many options. How intimidating for anyone trying to decide what software, plug-in, or consultant to use!

Automated testing involves different considerations based on audience, need, conformance standard and level, site complexity, and accessibility experience. Various types of automated testing include:

  • site-wide scanning and reporting (SortSite, Tenon.io, AMP)
  • server-based page analysis from one page to entire site (Cynthia Says, SiteImprove)
  • browser-based developer/QA plug-ins that evaluate one page at a time (WAVE, AInspector)
  • unit testing during development (aXe API)
  • integration testing before deployment (aXe API)

It strikes me that using a combination of tools with differing purposes could speed up the process and improve accuracy even more. By no means would they replace manual checks and end-user testing, but it’s an incentive not to pick just one tool for a job meant for several tools.
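
As a small experiment along those lines, here is a rough sketch of the unit-testing idea using the axe-core API in a Jest + jsdom setup; the component markup and test name are made up for illustration, and in practice a wrapper like jest-axe smooths over jsdom limitations (such as color-contrast checks).

```ts
// Rough sketch: running axe-core against a rendered fragment in a unit test.
// Assumes a Jest + jsdom environment; the markup below is a made-up example.
import axe from "axe-core";

test("main navigation has no detectable accessibility violations", async () => {
  document.body.innerHTML = `
    <nav aria-label="Main">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/contact">Contact</a></li>
      </ul>
    </nav>`;

  // axe.run resolves with results; violations lists every rule that failed.
  const results = await axe.run(document.body);
  expect(results.violations).toEqual([]);
});
```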