Day 85: The point – it’s for people with disabilities, Part 1

“…if you don’t understand the challenges that people with disabilities face when using ICT products and services, you don’t really know accessibility. Knowing what challenges people face is central to knowing how to reduce or eliminate challenges.” Karl Groves, What does it take to call yourself an accessibility expert?

That about sums it up for me after working through the WAS Body of Knowledge. The only way one can evaluate websites well is to remember who could be using our sites and how their engagement and experience may differ from our own. That’s basic UX (user experience) design, but with a focus on users with disabilities, which still encompasses a wide range of people and engagement strategies.

Things I accomplished

What I reviewed today

I’m trying to bring my focus back to the “who” part of my training. Without keeping them in the front of my mind, I will not be able to properly advocate for accessibility. Deque’s course highlights the following disabilities:

  • blind
  • low vision
  • color-blind
  • deaf
  • deafblind
  • motor disabilities
  • speech disabilities
  • cognitive disabilities
  • reading disabilities
  • seizures
  • multiple disabilities

Today I read through their explanations of various visual impairments. It was helpful to revisit what I had learned in the past about users with low vision and about identifying issues for keyboard users.

Blind

How they may interact:

  • may navigate by headings, landmarks, links via screen reader software;
  • listen for title and structure details of page via screen reader software;
  • use screen reader software, keyboard, refreshable braille display, touchscreen, or voice commands for input or output

Developer considerations:

  • content needs to be text or coupled with text equivalents (WCAG 1.1)
  • site functionality must work with a keyboard (WCAG 2.1)
  • markup must be structured well, using appropriate semantics (WCAG 1.3, 2.4, & 4.1.1)
  • custom elements must express themselves with a name, role, and value (WCAG 4.1.2); see the sketch after this list
  • dynamic changes in content come with an alert for screen readers (WCAG 4.1.3)
  • videos need audio description when the audio alone doesn’t convey the important visual information (WCAG 1.2)
  • active controls need to be clickable (WCAG 2.5)
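To make the name/role/value and keyboard points concrete, here is a minimal sketch of my own (not from the Deque course). A native <button> is usually the better choice, but this shows what a custom control has to expose:

```html
<!-- Sketch only: a custom toggle exposing name, role, and value,
     plus the keyboard support a native button would give for free. -->
<div id="mute" role="button" tabindex="0" aria-pressed="false">
  Mute notifications
</div>

<script>
  const mute = document.getElementById('mute');

  function toggleMute() {
    // Updating aria-pressed is the "value" that screen readers announce.
    const pressed = mute.getAttribute('aria-pressed') === 'true';
    mute.setAttribute('aria-pressed', String(!pressed));
  }

  // Custom controls must respond to Enter and Space like a real button.
  mute.addEventListener('click', toggleMute);
  mute.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      toggleMute();
    }
  });
</script>
```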

Low Vision

Low vision is a spectrum. It varies in degrees and characteristics.

How they may interact:

  • magnify the entire screen (with magnification software), zoom into web pages, or increase text size
  • increase contrast or invert colors with High Contrast Mode or other software
  • use screen reader to hear text
  • navigate by keyboard or mouse

Developer considerations:

  • popups, alerts, and errors should be close to the visual focus
  • color should not be the only way to relay important information (WCAG 1.4.1)
  • contrast of foreground and background should be no less than 4.5:1 (WCAG 1.4.3, 1.4.6, 1.4.11)
  • don’t disable pinch-to-zoom (see the sketch after this list)
  • interactive components (links, buttons, inputs) need a visible focus and hover state (WCAG 1.4.13)
  • controls need to look different (actionable) from plain text (WCAG 1.4.8)
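Two of these points translate directly into code. Here’s a rough sketch of my own (not from the course): a viewport meta tag that leaves pinch-to-zoom enabled, and focus styles that stay visible instead of being removed.

```html
<!-- Sketch only. Avoid user-scalable=no or maximum-scale=1,
     which disable pinch-to-zoom. -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Keep a clearly visible focus indicator; never just outline: none. */
  a:focus,
  button:focus,
  input:focus {
    outline: 3px solid #005a9c;
    outline-offset: 2px;
  }

  /* Help controls look actionable, not like plain text. */
  a { text-decoration: underline; }
</style>
```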

Color-blind

Like low vision, color-blindness is not an all-or-nothing characteristic. Degrees of color identification vary from person to person.

How they may interact:

  • strategies to compensate may involve asking for help to distinguish colors

Developer considerations:

  • color should not be the only way to relay important information (WCAG 1.4.1); see the sketch below
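As a small sketch of my own, here’s one way a form error can satisfy that: the state is conveyed by text and an icon, not by red color alone.

```html
<!-- Sketch only: error conveyed by text and an icon, not color alone. -->
<label for="email">Email (required)</label>
<input id="email" type="email" aria-invalid="true" aria-describedby="email-error">
<p id="email-error">
  ⚠ Error: please enter an email address, for example name@example.com
</p>
```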

Day 84: Strategies and Techniques for Fixing A11y Issues

Eighty-four (84) days in, and I made it to the end of the WAS Body of Knowledge (BOK)! I’m ready to go back through all my blog posts (journaling) to review the things that I’m so nervous about forgetting by April. This will include poring over the W3C’s Web A11y Evaluation Background Reading materials. I’ll also spend the last few weeks of these 100 days working through the Deque courses that apply to this certification and tying together ideas that will help me be a better Web Accessibility Specialist (WAS) in practice.

Today’s study session felt less productive, because the topic is essentially about reviewing and applying everything already learned to work as a WAS. However, the day did not go by without some positive steps toward taking the exam.

Things I accomplished

  • My request to take the WAS certification exam was accepted today, so I registered for the exam.
  • Read over the last section of the BOK: Recommend strategies and/or techniques for fixing accessibility issues.
  • Continued further through the Deque Accessibility Basics course.

What I learned today

A specialist or expert in web accessibility should have a solid understanding of:

  • how to evaluate web content using WCAG 2.0,
  • accessible web design,
  • web technologies,
  • assistive technologies,
  • how people with different disabilities use the Web,
  • accessibility barriers that people with disabilities experience,
  • assistive technologies and adaptive strategies that people with disabilities use, and
  • evaluation techniques, tools, and methods to identify barriers for people with disabilities.

Additionally, I think this person needs to bolster their project management and communication skills. Not only will they know what they’re talking about, but they’ll also help educate and encourage the people they are assisting with evaluation and remediation. A teacher and project manager, of sorts. Accessibility Pro Certified: To Be or Not To Be is a wonderful article that takes into consideration the idea of certification and what makes an accessibility pro or expert.

In order to recommend remediation strategies, a specialist has to understand:

  • how to create accessible content (the first major section of the BOK)
  • how to identify accessibility issues (the second major section of the BOK)
  • how to wisely choose an appropriate remediation technique that fits the goals and limitations an organization is working within (the third major section of the BOK)

This last study topic section in the BOK made me reflect back on all the considerations that go into prioritizing remediation, which often comes down to a balance of user and business impact. It circles back nicely to the start of the BOK, where I need to fully understand what accessible content is and how inaccessible content impacts users with disabilities.

 

Day 83: Prioritizing Remediation of A11y Issues

During today’s study session, I walked away with a lot of new-to-me information and useful steps to apply to my current work.

Things I accomplished

What I learned today

When starting an accessibility remediation project, start with a site’s core functionalities. Determine the issue’s origin (markup, style, functionality), then prioritize accessibility issues by severity of:

  • impact on users: does the problem have a user workaround or are they completely inhibited (blocked) from using a core functionality?
  • legal risk: related to user impact; is it a legal risk (based on functionality block and type of organization) or just a usability issue? take note of perceivability and repeat offenders
  • cost benefit: is the return on investment (ROI) greater than the cost of remediation or of a potential lawsuit? (a worked example with made-up numbers follows this list)
    e.g. ROI = ((Risk Amount – Investment) / Investment) * 100
  • level of effort to remediate (impact on business): how many changes (and where) have to be made?
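To make that ROI formula concrete, here’s a worked example with entirely made-up numbers: say a potential legal exposure (risk amount) of $60,000 and a remediation investment of $10,000.

```latex
\mathrm{ROI} = \frac{60{,}000 - 10{,}000}{10{,}000} \times 100 = 500\%
```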

WCAG conformance levels and success criteria are not the way to determine priority of remediation.

As mentioned in my notes about manual versus automated testing tools, it’s always best to target low-hanging fruit to begin quickly resolving issues.

When receiving an audit and moving on to remediation, people want to know:

  • where the problems are
  • what the problems are
  • how to fix them
  • not the specific technical guidelines and success criteria

Remediation teaches a hard lesson: if things are made accessible from the start, less time and money is wasted.

Time is money. Even if you save time by taking down inaccessible materials, that time is simply shifted (as technical debt) to help desk lines or other resources.

I really liked Michigan State University’s accessibility severity scale:

  1. Level 4, Blocker: Prevents access to core processes or many secondary processes; causes harm or significant discomfort.
  2. Level 3, Critical: Prevents access to some secondary processes; makes it difficult to access core processes or many secondary processes.
  3. Level 2, Major: Makes it inconvenient to access core processes or many secondary processes.
  4. Level 1, Minor: Makes it inconvenient to access isolated processes.
  5. Level 0, Lesser: Usability observation.

Remediation procedure levels by Karl Groves:

  • simple prioritization: time versus impact (user-centric)
  • advanced prioritization: scoring business and user impact (broken down by user type); see the worked example after this list
    (Impact + Repair Speed + Location + Secondary Benefits) * Volume = Priority
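And a worked example of the advanced formula with made-up scores: Impact 4, Repair Speed 3, Location 5, Secondary Benefits 2 (each rated on a 1 to 5 scale), across a Volume of 20 affected pages.

```latex
\mathrm{Priority} = (4 + 3 + 5 + 2) \times 20 = 280
```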

Best quote from today’s Deque course

Accessibility does not happen by accident. It has to be purposefully planned, built, and tested for accessibility.

Day 82: Testing with Users with Disabilities

Today’s study session led me back to usability testing. Alongside automated and manual testing tools, it seems critical to add usability testing to our testing toolbox to catch usability and accessibility issues that escape conformance checks.

Personally, this is one of the areas I struggle to implement. I love reading about usability testing and the case studies that people document, but I’ve not yet taken the opportunity to try it myself, usually because it carries its own added cost, as well as the awkwardness of setting up testing with a specific group of people. Maybe this will be my motivator to make some connections and start a plan to make this happen this year.

Things I accomplished

Read:

What I learned today

The guidelines are not all-inclusive. Some good accessibility techniques may not be in WCAG because:

  • It is difficult to objectively verify compliance with the technique
  • The writers of the guidelines did not recognize the need for the technique when writing the guidelines.
  • The technique was not necessary (or at least not anticipated) at the time the guidelines were written, because the technologies or circumstances that require the technique are newer than the guidelines.

Before bringing in users for testing, do some preliminary checks and fix known issues in order to better discover underlying accessibility and usability challenges that were not detectable by software or manual checks.

Including users in testing doesn’t have to be a full-blown usability study. Informal evaluations and brief interactions with feedback can be very helpful. Additionally, informal evaluations can happen throughout the product’s lifecycle, whereas formal usability studies usually occur near the end of development. Bonus: informal interactions can help us all see the person more clearly, rather than as a case study.

Never assume that feedback from one person with a disability speaks for all people with disabilities. A small-scale evaluation (only a few people within a study) is not enough to draw statistically significant conclusions, even though it yields valuable insight. Try to include a variety of disabilities (auditory, cognitive, neurological, physical, speech, and visual) with different characteristics. If possible, include older people as well.

Further reading

 

Day 81: Manual vs. Automated A11y Testing Tools

Today I went into my study time with the intent to list out the pros and cons of automated versus manual accessibility testing. Instead, I walked away with a comparison of what each has to offer, and an understanding that both are valuable when used cooperatively during website and web app development.

Things I accomplished

Submitted my request to take the Web Accessibility Specialist certification exam in early April via private proctor.

Read:

Created a comparison table to jot down ideas about manual and automated testing (see under What I learned today).

What I learned today

| Manual Testing | Automated Testing |
| --- | --- |
| Slower process | Faster process |
| Mostly accurate | Sometimes accurate |
| Easier to miss a link | Guaranteed check of all links |
| Identifies proper state of elements | Automated user input can miss state |
| Page by page | Site-wide |
| Assurance of conformance | Misleading in assurance of conformance |
| Guidance for alternative solutions | Yes/No (boolean) checks and solutions |
| Human and software | Software |
| Context | Patterns |
| Finds actual problems | Lists potential problems |
| Appropriate HTML semantics | HTML validation |
| Accurate alt text | Existence of alt attribute |
| Heading hierarchy | Headings exist |
| Follows intention of usability | Follows WCAG success criteria |
| Text is/isn’t readable | Programmatic color contrast |
| Exploratory | Automated |
| Part of the testing process | Part of the testing process |
| Appropriate use of ARIA | Presence and validity of ARIA |
| In real life | Hypothetical |
| Identifies granular challenges of usability | Quickly identifies low-hanging fruit and repeated offenders |

In conclusion

Deciding on testing methods and tools shouldn’t be an either-or mandate. Each has its strengths and weaknesses. Using both methods should be a part of every testing process. Why not strengthen your product’s usability by incorporating tools from each methodology into your process?

Day 80: Manual A11y Testing Tools

Yesterday I browsed through automated accessibility testing tools. Today, per their mention in the WAS Body of Knowledge, I discovered some manual accessibility testing tools that offer more insight into problems that can’t be caught in automated reports. These tools go beyond the easy checks, like color contrast, headings, and keyboard access, that I’m used to checking for.

Tomorrow I hope to dig in a bit deeper to compare the difference between automated and manual testing, along with the drawbacks of each.

Things I accomplished

What I learned today

Manual testing tools, much like automated testing tools, offer reports and automated tests for all audiences within the development process to get a start on addressing accessibility issues. The advantage that manual tooling provides is that it offers additional guidance and education to fix problems that cannot be systematically evaluated through automated checkpoints. However, no tooling replaces human judgement and end-user testing.

Manual testing tools can include:

  • guided manual testing and reports, based on heuristics (WorldSpace Assure)
  • browser inspector tools and add-ons (accessibility audit in Chrome DevTools)
  • accessibility API viewers (Accessibility Viewer views the a11y tree)
  • simulators (No Coffee visual disabilities simulation)
  • single (heading levels) and multi-purpose (many checkpoints) accessibility tools

Another observation about manual testing tools: they may take more time to work through results, but many more of them are free to use compared to automated full-site testing tools.

Though many manual testing tools seem to fall between the development and testing phases, some system-wide tools help earlier in the life cycle. Color Oracle is one such application that can assist designers during the design process, before any code is written. It takes colorblindness into consideration at the beginning of the site’s life cycle.

An Aside

Ran across an accessibility basics article by Microsoft, and loved this catchphrase:

“Accessibility is a built-in, not a bolt on.”

Day 79: Automated A11y Testing Tools

Moving on to another section in the WAS Body of Knowledge, I’m quickly approaching the end. I’m postponing the “Test for End-user Impact” section in order to work through the “accessibility testing tools” section. The summary says it all for me:

“No accessibility software tool can find all the accessibility issues on a web site, but software tools can expedite the process of finding accessibility issues, and increase the overall accuracy when supplemented by a skilled manual evaluation of the same content.”

Or, as the Web Accessibility Initiative (WAI) sums it up:

“We cannot check all accessibility aspects automatically. Human judgement is required. Sometimes evaluation tools can produce false or misleading results. Web accessibility evaluation tools can not determine accessibility, they can only assist in doing so.”

Things I accomplished

What I learned today

I hadn’t considered this before, but not all tools are meant to target one audience (developers). Each tool is created with a specific audience in mind, whether it be:

  • designers,
  • developers,
  • non-technical content authors,
  • quality assurance testers, and
  • end-users

There are SO many options. How intimidating for anyone trying to decide what software, plug-in, or consultant to use!

Automated testing involves different considerations based on audience, need, conformance standard and level, site complexity, and accessibility experience. Various types of automated testing include:

  • site-wide scanning and reporting (SortSite, Tenon.io, AMP)
  • server-based page analysis from one page to entire site (Cynthia Says, SiteImprove)
  • browser-based developer/QA plug-ins that evaluate one page at a time (WAVE, AInspector)
  • unit testing during development (aXe API)
  • integration testing before deployment (aXe API)

It strikes me that using a combination of tools with differing purposes could help speed up the process and ensure accuracy even more. By no means would they replace manual checks and end-user testing, but it’s an incentive not to pick just one tool for a job meant for several tools.

 

Day 78: Learning about Orca

Orca is an open source screen reader for Linux. This is my first time reading about it. Hopefully, I’ll have a chance to actually experiment with it; however, I’ll need to set up a Linux distribution that works with it first.

Things I accomplished

  • Attempted to install Orca on my netbook (Lubuntu), and then on my Raspberry Pi (Raspbian). Both failed attempts (today, anyway).
  • Read through a lot of Orca documentation.
  • Added keystrokes to my screen reader cheatsheet, and copied the spreadsheet over to my WAS cheatsheets on Google Sheets.

What I learned today

  • Orca can provide speech or braille output.
  • Orca is provided as a default screen reader for several operating systems and Linux distributions, including Solaris, Fedora, and Ubuntu.
  • Orca provides a really cool feature called Where Am I that allows additional commands to inform the user about page title, link information (location, size), table details, and widget role and name.
  • Many of the navigation keystrokes are similar to other desktop screen reader commands.
  • Orca also has commands specific to dealing with Live Regions on webpages.
  • When the “Super” key is referenced, it means the Windows logo key.
  • Orca provides Gecko-specific navigation preferences. I wonder if it works best with Firefox?

Not only did I learn about Orca, but I also got sucked down the Linux rabbit hole in order to better grasp that OS, its distributions and desktop environments, and additional “universal access” for people with disabilities. However, that topic could take another week to work through.

Day 77: Experimenting with Window-Eyes

Window-Eyes is a screen reader that appears to have fallen out of mainstream use. According to WebAIM’s latest Screen Reader Survey, 1.5% of respondents reported that they use Window-Eyes. I experimented with it because it was listed as an example of an assistive technology to try out and test.

Things I accomplished

What I learned today

  • Window-Eyes works best with Internet Explorer.
  • Window-Eyes was folded into the AI Squared family, and there are instructions on how to migrate from Window-Eyes to JAWS (mp3).
  • Window-Eyes is free to download if you have a registered copy of Microsoft Office.
  • Most keystrokes are similar to those of other Windows screen readers, but Window-Eyes uses the Control or Insert key as its modifier.

Day 76: Screen Reader Keystroke Comparisons, Part 2

Continued work from Screen Reader Keystroke Comparisons, Part 1. Tomorrow I hope to dive into one of the other screen readers that I’m less familiar with (either Window-Eyes or Orca).

Thing I accomplished

  • Added VoiceOver (Mac), VoiceOver (iOS), and Talkback keystrokes/gestures to my (offline) comparisons spreadsheet.

What I learned today

  • For VoiceOver on Mac, Control + Option + Command + X navigates to the next list on a page.
  • Noticed for the first time that the Talkback cheatsheet [PDF] for Android devices recommends using the Firefox browser. I had assumed Chrome or the device’s proprietary browser.
  • It’s kind of nerdy fun to see the keystroke and gesture differences and similarities next to each other, which is helping me differentiate what works on which device.

Day 75: Screen Reader Keystroke Comparisons, Part 1

Sporadically, I’ve used some of my study time to test out using assistive technologies (AT) like screen readers, speech recognition, and high contrast mode. I’m circling back to AT because I’ve hit the section in the WAS Body of Knowledge that stresses testing with AT in order to better understand how people who use AT may experience your website. This will be a fun week for me because I enjoy trying out AT and broadening my perspective to how users encounter webpages.

Thing I accomplished

  • Added NVDA, JAWS, Narrator, and VoiceOver (Mac) keystrokes to a new (offline) comparisons spreadsheet I’ve started working on.

What I learned today

  • NVDA and JAWS have many similar keystrokes and shortcuts, although I’m not sure why NVDA uses “D” for going to the next region, when JAWS uses “R” which is easier to remember.
  • Oh! Deque has a cheatsheet for JAWS Keyboard Shortcuts for Word. I’ll have to take a closer look this week.
  • JAWS has several different cursors to toggle between, dependent on context.
  • Narrator has a specific mode for developers to use during testing.
  • Keyboard accessibility is not enabled by default on a Mac. Accessibility and screen reader test results will be inaccurate if you do not enable keyboard accessibility in the following two places:
    1. System Settings: Keyboard > Shortcuts > Full Keyboard Access > All controls
    2. Safari Settings: Advanced > Accessibility > Press Tab to highlight each item on a webpage.
  • Switching between Mac and Windows keystrokes just feels awkward. Imagine a screen reader user switching operating systems!

Recent interesting A11y articles

Day 74: Striving for WCAG Level AAA, Part 3

Today I spent time with the Understandable Level AAA criteria. Some of the Readable Guideline’s success criteria (SC) seem like a nice effort to make for everyone who comes to your site. However, I can see how that can take a lot more communication between content creator and developer to provide the necessary additional content or appropriate code to implement sufficient techniques correctly.

Things I accomplished

  • Read Understandable’s Level AAA success criteria on How to Meet WCAG 2 site.
  • Mapped success criteria to failures of those criteria.

What I learned today

There are 7 Level AAA criteria under Understandable, none of which are new to 2.1:

  • 3.1.3 Unusual words
  • 3.1.4 Abbreviations
  • 3.1.5 Reading level
  • 3.1.6 Pronunciation
  • 3.2.5 Change on request
  • 3.3.5 Help
  • 3.3.6 Error prevention (all)

There are 0 Level AAA criteria under Robust.

Examples of Understandable Level AAA failures

SC 3.1.3 Unusual words Fail: Specialized words are used in the content, but no definitions are provided. Providing definitions or a glossary would greatly benefit people with cognitive, language, and learning disabilities.

SC 3.1.4 Abbreviations Fail: Abbreviations are present, but no expanded form is available on the page, on the assumption that the user already knows what they mean. This can create confusion for people with cognitive issues or those using a screen magnifier (which limits contextual cues).
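A tiny sketch of my own showing one sufficient approach: expand the abbreviation in the surrounding text, or use the abbr element.

```html
<!-- Sketch only: give the expanded form in text or via the abbr element. -->
<p>
  The <abbr title="Web Accessibility Specialist">WAS</abbr> exam is run by
  the IAAP (International Association of Accessibility Professionals).
</p>
```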

SC 3.1.5 Reading level Fail: No summary or additional visuals written at a lower reading level (below 8th grade) are provided for complex content. This may inhibit people with reading disabilities, or people reading English as a second language, from understanding your content.

SC 3.1.6 Pronunciation Fail: No audio or text pronunciation is provided for difficult words. This hinders people who use screen readers and people with reading disabilities from fully understanding the content provided.

SC 3.2.5 Change on request Fail: A new window opens when the user clicks on a link without the user expecting the change of focus to a new window. Related to SC 3.2.1 and SC 3.2.2. This greatly affects people who use screen readers.

SC 3.3.5 Help Fail: A form supplies no detailed instructions about data format or other information necessary to submit the form. Leaving out required information and helpful tips can hinder people with visual, cognitive, or motor disabilities from submitting the form correctly.

SC 3.3.6 Error prevention (all) Fail: A form that is not a legal, financial, or data transaction fails to provide a reversible, review, or confirmation step when error is possible. Related to SC 3.3.4. This may affect people with reading or motor disabilities.

Day 73: Striving for WCAG Level AAA, Part 2

Carrying on from Part 1, today I spent time with the Operable Level AAA criteria. There are a few, I feel, that should be on a lower conformance level, as well as best practice for web designers.

Things I accomplished

  • Read Operable’s Level AAA success criteria on How to Meet WCAG 2 site.
  • Mapped success criteria to failures of those criteria.

What I learned today

There are 12 Level AAA criteria under Operable:

  • 2.1.3 Keyboard (no exception)
  • 2.2.3 No timing
  • 2.2.4 Interruptions
  • 2.2.5 Re-authenticating
  • 2.2.6 Timeouts (new in 2.1)
  • 2.3.2 Three flashes
  • 2.3.3 Animation from interactions (new in 2.1)
  • 2.4.8 Location
  • 2.4.9 Link purpose (link only)
  • 2.4.10 Section headings
  • 2.5.5 Target size (new in 2.1)
  • 2.5.6 Concurrent input mechanisms (new in 2.1)

Examples of Operable Level AAA failures

SC 2.1.3 Keyboard (no exception) Fail: A div element has been scripted to be clickable, but it is not keyboard-operable and is not identified as a link to assistive technology. This makes custom page navigation more difficult for people who use screen readers or voice input.
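Here’s a quick sketch of my own contrasting that failure with two possible fixes (a real link is almost always the right answer):

```html
<!-- Fails: clickable, but exposed to assistive technology as plain text
     and unreachable by keyboard. -->
<div onclick="location.href='/pricing'">View pricing</div>

<!-- Preferred fix: a real link. -->
<a href="/pricing">View pricing</a>

<!-- If a native link truly isn't possible: expose a role, make it
     focusable, and handle the keyboard. -->
<span role="link" tabindex="0"
      onclick="location.href='/pricing'"
      onkeydown="if (event.key === 'Enter') location.href='/pricing'">
  View pricing
</span>
```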

SC 2.2.3 No timing Fail: Web app forces a time limit on the user. This can make use of the web app harder for people with cognitive or motor impairments.

SC 2.2.4 Interruptions Fail: User is not given an option to delay or request non-emergency updates. Interruptions can be disorienting for a person who uses a screen reader when their cursor focus is forced elsewhere.

SC 2.2.5 Re-authenticating Fail: Form data is not saved when the user is timed out of a session. This can be frustrating for people who have motor or cognitive impairments.

SC 2.2.6 Timeouts (new in 2.1) Fail: Users are not explicitly warned about data loss due to inactivity time limits.

SC 2.3.2 Three flashes Fail: A video on the page contains three or more flashes in a 1-second period. This can greatly affect people who are prone to seizures.

SC 2.3.3 Animation from interactions (new in 2.1) Fail: Decorative animation that occurs when a user interacts with an element does not have a “reduce motion” option for users to turn it off. Animation can cause nausea for people with vestibular disorders and distraction for people with attention disorders.

Side note: This concept introduced me to the Reduced Motion media query. Before today, I didn’t know that existed!
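Here’s a minimal sketch of my own using that media query to switch off a purely decorative animation:

```html
<!-- Sketch only: honor the user's OS-level "reduce motion" setting. -->
<style>
  .banner {
    animation: drift 2s ease-in-out infinite alternate;
  }

  @keyframes drift {
    from { transform: translateX(0); }
    to   { transform: translateX(2rem); }
  }

  @media (prefers-reduced-motion: reduce) {
    .banner { animation: none; }
  }
</style>

<div class="banner">Welcome!</div>
```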

SC 2.4.8 Location Fail: The user’s location within the website is not evident; no breadcrumbs, sitemap, or highlighted navigation bar has been provided. This affects people with attention disabilities (as well as everyone else).

My two-cents: This seems like it should be good practice and common courtesy to help everyone find their way around your site. I had no idea it was Level AAA and have considered using breadcrumbs as part of business as usual when I build more pages for large sites.

SC 2.4.9 Link purpose (link only) Fail: Generic and unclear text is provided within hyperlinks. This can be confusing for people who use screen readers and voice input, as well as those with cognitive disabilities.

SC 2.4.10 Section headings Fail: Text-heavy content and instructions are provided on a page without any additional headings to segment sections of text for clarity. This greatly affects people with visual and cognitive disabilities.

Confession: This is a bit of a pet peeve of mine, and I often feel this should be part of best practice for web designers. It especially irritates me when I see lists nested within lists, but no headings are provided to chunk and clarify text-rich content areas. Please make your page more navigable for everyone by including section headings.

SC 2.5.5 Target size (new in 2.1) Fail: Interactive buttons and customized links are less than 44×44 pixels. This can make it especially hard for people with motor disabilities to activate the button or link.

Added note: this is already making its way into mobile web app best practice, which is where I was first introduced to this concept. I’m happy to see it considered as an accessibility issue, too.
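As a sketch of my own, small icon-only controls can be given a minimum tap target in CSS:

```html
<!-- Sketch only: keep tap targets at least 44×44 CSS pixels. -->
<style>
  .icon-button {
    display: inline-flex;
    align-items: center;
    justify-content: center;
    min-width: 44px;
    min-height: 44px;
  }
</style>

<button class="icon-button" aria-label="Search">🔍</button>
```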

SC 2.5.6 Concurrent input mechanisms (new in 2.1) Fail: Interactive components on a page accept input only from touch and restrict other input devices. This limits who can use that page, because people of all abilities use their input device (or devices) of choice.

Day 72: Striving for WCAG Level AAA, Part 1

Originally, I thought I was going to do Level AAA conformance in one sitting. Turns out there were 28 additional criteria! To better digest the techniques and failures associated with these criteria, I had to break that reading up into three study sessions, given I spend approximately 1.5 hours a day studying. Not to mention, I find myself falling down a documentation rabbit hole to learn more about points I’m extremely curious about.

Things I accomplished

  • Read Perceivable’s Level AAA success criteria on How to Meet WCAG 2 site.
  • Mapped success criteria to failures of those criteria.

What I learned today

There are 9 Level AAA criteria under Perceivable:

  • 1.2.6 Sign language (prerecorded)
  • 1.2.7 Extended audio (prerecorded)
  • 1.2.8 Media alternative (prerecorded)
  • 1.2.9 Audio-only (live)
  • 1.3.6 Identify purpose
  • 1.4.6 Contrast (enhanced)
  • 1.4.7 Low or no background audio
  • 1.4.8 Visual presentation
  • 1.4.9 Images of text (no exception)

Examples of Perceivable Level AAA failures

SC 1.2.6 Sign language (prerecorded) Fail: Sign language interpretation is not provided along with a prerecorded video. This SC benefits deaf users who rely on American Sign Language as their first language.

SC 1.2.7 Extended audio (prerecorded) Fail: The movie moves too quickly to accommodate synchronized audio description, so the video needs to be extended to fit substantial audio description. This SC enables blind and visually impaired users to better understand what the movie is conveying.

SC 1.2.8 Media alternative (prerecorded) Fail: A full text transcript providing an equal experience for a prerecorded video is not supplied. This SC benefits people who have a combination of visual and auditory impairments.

Confession: I often confuse captions and transcripts. They are not the same and do not provide the same access to everyone. Ultimately, captions are the bare minimum. Transcripts are more inclusive.

SC 1.2.9 Audio-only (live) Fail: No text alternative, like live captioning, was offered during a live audio performance. This would benefit people with auditory disabilities.

SC 1.3.6 Identify purpose Fail: Landmarks are not identified across the page. This inhibits the adaptability of the page to meet the needs of those with cognitive disabilities. Related to SC 1.3.5.

SC 1.4.6 Contrast (enhanced) Fail: Contrast of foreground text/images over background does not meet the 7:1 ratio. This enhanced criterion makes it easier for people with low vision or colorblindness to perceive content. Related to SC 1.4.3.

SC 1.4.7 Low or no background audio Fail: An audio track embedded on the page contains speech over background music, and the user has no way to turn the background music off. This criterion is meant to enhance the distinguishability of that speech for people who are hard of hearing.

SC 1.4.8 Visual presentation Fail: Paragraph text is justified. This presents reading challenges to those with reading, cognitive, or visual disabilities.

Confession: I strongly feel that SC 1.4.8 could easily be pushed into Level AA, so more people would strive for better visual presentation for all users, since it strikes me as just good practice and even common courtesy. I mean, who doesn’t benefit from left-aligned text, greater line spacing, shorter paragraph width, a choice of foreground/background colors, and the elimination of horizontal scrolling of text?

SC 1.4.9 Images of text (no exception) Fail: Images of text (not including the brand logo) are present throughout the site. People who zoom or magnify their screen would benefit greatly from the elimination of all images with text. Related to SC 1.4.5.

Off topic but interesting

Today I learned that there are over 120 HTML elements from 4.01 on up to 5.2, plus all the attributes that go along with them. This huge inventory just confirms to me that more developers should spend more time with the “basics” of HTML. Some parts of Level AAA could be more achievable (e.g., SC 1.3.6) if people spent more time understanding the building blocks, along with CSS, rather than focusing on the “perfection” of JavaScript and other programming languages to get the front-end job done.

Day 71: How to Fail WCAG Level AA, Part 2

Continuing on from Part 1 with how to pass or fail WCAG Level AA…

Things I accomplished

  • Read Operable, Understandable, and Robust success criteria (Level AA) failure techniques on How to Meet WCAG 2 site.
  • Mapped success criteria to failures of those criteria that I’ve encountered or read about.

What I learned today

There are only 3 additional operable success criteria:

  • 2.4.5 Multiple ways
  • 2.4.6 Headings and labels
  • 2.4.7 Focus visible

There are only 5 additional understandable success criteria:

  • 3.1.2 Language of Parts
  • 3.2.3 Consistent navigation
  • 3.2.4 Consistent identification
  • 3.3.3 Error suggestion
  • 3.3.4 Error prevention (legal, financial, data)

There is only 1 additional robust success criterion, which was added in 2.1:

  • 4.1.3 Status messages (new to 2.1)

Examples that fail base conformance

Operable

SC 2.4.5 Multiple ways Fail: People with visual or cognitive disabilities do not have enough options to explore the website. Provide alternative ways to navigate, such as a search, sitemap, or table of contents.

SC 2.4.6 Headings and labels Fail: No section titles have been provided for a text- or content-heavy page. Providing them eases navigation for people with visual, motor, and cognitive impairments (and everyone else).

SC 2.4.7 Focus visible Fail: Focus outline was removed for input controls. This impacts keyboard-only users who need to see their navigation points throughout the page.

Understandable

SC 3.1.2 Language of Parts Fail: Paragraph and span elements do not contain a lang attribute when a change of language is obviously present. This impacts understanding for anyone, but especially impacts screen reader users and browsers that need explicit identification of document and parts of text language. Related to SC 3.1.1.
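A minimal sketch of my own: declare the document language, then mark the change of language on the part where it occurs.

```html
<!-- Sketch only: document language plus a change of language on a part. -->
<html lang="en">
  <body>
    <p>As the French say, <span lang="fr">c'est la vie</span>.</p>
  </body>
</html>
```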

SC 3.2.3 Consistent navigation Fail: Global navigation links are not consistent in presentation and order across the website. This can be confusing for anyone, but especially impacts people with cognitive impairments.

SC 3.2.4 Consistent identification Fail: Components with the same function are not consistently labelled across the website, which impairs recognition of similar tasks and functionality. Related to SC 1.1.1 and SC 4.1.2. This can affect people with visual or cognitive impairments.

SC 3.3.3 Error suggestion Fail: No description of required input error is provided client-side nor server-side. Related to SC 3.3.1 and SC 3.3.2. This can affect people with visual or cognitive impairments.

SC 3.3.4 Error prevention (legal, financial, data) Fail: During an online data transaction, the user is not given an opportunity to review, correct, or reverse the form data being submitted. Especially on multi-step forms that span several pages, this can affect people with cognitive disabilities.

Robust

SC 4.1.3 Status messages (new to 2.1) Fail: role="status" was not assigned to a search update that returned a number of results, so the count of results found is never announced. This can greatly impact visually impaired people who use screen readers.
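A small sketch of my own showing the fix: a status region whose text updates are announced by screen readers without stealing focus.

```html
<!-- Sketch only: updating this region's text is announced politely,
     e.g. "12 results found", without moving keyboard focus. -->
<div role="status">
  <p id="search-count">12 results found</p>
</div>
```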

You can quickly find what you need

Last night (in bed, of course), I was thinking about how the WCAG Quick Reference could use some filtering options, like tags, in order to quicken my information discovery. Well, someone was already thinking of this! They have many filters, including WCAG version, tags, conformance levels, techniques, and technologies, set up to help people find what they’re looking for:

[Image: Filter tab open, revealing options like version, tags, levels, and techniques.]

I should have explored this option earlier in my week!

In conclusion

9 more success criteria across three principles is not too much more to ask of designers and developers. To reinforce that idea, I don’t find these nine criteria especially hard to implement.