Day 38: Accessible Custom Widgets Overview

Struggling a bit to pin down exactly what I need to study when it comes to accessibility techniques (beyond browser and assistive technology compatibility), I’m moving on to the next WAS Body of Knowledge topic: “create interactive controls/widgets based on accessibility best practices.” I feel mostly familiar with the topic I researched yesterday, and I think getting back into code and doing some testing myself will help solidify that knowledge. In reality, the interactive controls/widgets concept circles back around to the “accessible JavaScript, AJAX, and interactive content” and ARIA sections, but it will give me more context for, and application of, that head knowledge.

And, to be honest, after all this reading and Googling, I’m ready to jump back into coding with CodePen, as well as my local environment.

Things I accomplished

In review

Through researching accessible interactive content, I learned that I need to:

  • manage focus
  • use semantic HTML
  • keep content perceivable at all times
  • create device-independent events
  • consider DOM order when adding content dynamically
  • simplify events

Basic keyboard interactions with a webpage:

  • Tab navigates to next focusable element
  • Shift+Tab navigates to previous focusable element
  • Arrows navigate between related radio buttons, menu items, or widget items
  • Enter activates a link or button, or submits a form
  • Space activates a button or toggle
  • Esc closes menus, modals, and other popover variations

Remember… W3C has a thorough document on ARIA authoring practices, including an extensive section on Design Patterns and Widgets.

What I learned today

  • The Tab key should focus on the widget; the arrow keys should navigate within the widget (a sketch of this pattern follows this list).
  • When a custom role is assigned to an element, the custom role completely overrides the native role.
  • When creating ARIA widgets, pay attention to the semantic structure of the roles. Some roles have required parent or child roles, or required attributes.
  • role="application" should be used sparingly. It overrides many AT keystrokes, such as the ones that allow screen reader users to navigate by headings, landmarks, and tables.
  • A screen reader has two modes it uses to access a website:
    • focus mode
    • browse mode
  • Some of the MDN docs about ARIA share vital information about keyboard interaction, available states and properties, and effects on assistive technologies.
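
To make that first bullet concrete, here is a minimal roving-tabindex sketch in JavaScript. It assumes a hypothetical container with id "color-menu" and role="menu" whose children use role="menuitem"; the ARIA Authoring Practices document describes the complete pattern, so treat this as an illustration rather than a full implementation.

    // Tab reaches the widget; arrow keys move within it (roving tabindex).
    const menu = document.getElementById('color-menu'); // hypothetical container
    const items = Array.from(menu.querySelectorAll('[role="menuitem"]'));

    // Only one item sits in the Tab order at a time; the rest get tabindex="-1".
    items.forEach((item, i) => item.setAttribute('tabindex', i === 0 ? '0' : '-1'));

    menu.addEventListener('keydown', (event) => {
      const current = items.indexOf(document.activeElement);
      if (current === -1) return;

      let next = null;
      if (event.key === 'ArrowDown') next = (current + 1) % items.length;
      if (event.key === 'ArrowUp') next = (current - 1 + items.length) % items.length;
      if (next === null) return;

      event.preventDefault(); // keep the arrow keys from scrolling the page
      items[current].setAttribute('tabindex', '-1');
      items[next].setAttribute('tabindex', '0');
      items[next].focus(); // move focus along with the roving tabindex
    });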

Related resource of the day

In Deborah Edwards-Onoro’s The State of the Web: Making the Web More Accessible, she offers her takeaways from a 30-minute video that Google produced about the importance of web accessibility. It’s a quick read when you’re crunched for time, and a good reminder of just how important accessible websites are. My favorite was takeaway #1: “The more accessible your website is, the more usable it is to everyone.”

Day 37: How Well Do Browsers Play with Assistive Technologies?

This week I’m moving into the WAS Body of Knowledge section “choose accessibility techniques that are well-supported”. Most of these topics I’ve had some experience with and even preached about myself. For instance, adhering to coding standards and building with progressive enhancement in mind are two concepts I firmly believe can eliminate a lot of problems. I also understand that testing across platforms, browsers, and assistive technologies is important in order to discover what unanticipated barriers might occur, despite coding to standard.

That being said, today I focused on learning about what combinations of browsers and assistive technologies have been tested to work the best together. I know a bit about screen reader and browser combinations, but I’m certain there is more to learn than the base knowledge I have.

Things I accomplished

What I learned today

Here’s what I learned:

  • VoiceOver (VO) on macOS works mostly well with Firefox, but VO used with Chrome has limited support. Naturally, VO works best with Safari.
  • On that note, TalkBack is the only screen reader that works best with Chrome. Other screen readers have limited support or some support with exceptions. Oddly enough, even ChromeVox has some exceptions.
  • Edge does not support Dragon or ZoomText, and yet Internet Explorer (IE) does. As a matter of fact, IE is recommended for use with these two technologies.
  • Edge has the most support (with exceptions) for Narrator.
  • JAWS has long been recommended for use with IE, but Firefox has recently become a close second.
  • NVDA still plays best with Firefox.
  • Firefox and IE differ in visual focus, so both should be tested for this.
  • Likewise, video and audio elements differ across browsers, so those should be tested across browsers, too.
  • IE and Firefox are the only browsers that support Flash and Java accessibility.
  • ChromeVox uses the DOM to access content for the listener, rather than an accessibility API or a combination of API and DOM like other screen readers.
  • Level Access has a wiki on iOS Accessibility Issues.
  • SAToGo is another screen reader that works on Windows.

One of my favorite resources for checking support across browsers is caniuse.com. Choice of elements can really matter in cases where IE doesn’t support all HTML5 elements, including dialog (see the sketch below). This resource alone has taught me so much about browser support for standards as I’ve worked through projects. In this vein, HTML5 Accessibility is another useful site.
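
Since dialog support is exactly the kind of thing I check on caniuse.com, here’s a minimal feature-detection sketch; the fallback branch is only a placeholder comment, not a full accessible dialog.

    // Check whether the browser supports the native <dialog> element before
    // relying on it; older browsers like IE will need a scripted fallback.
    const testDialog = document.createElement('dialog');

    if (typeof testDialog.showModal === 'function') {
      // Native <dialog> is supported; showModal()/close() are safe to use.
    } else {
      // No native support: fall back to an accessible custom dialog
      // (role="dialog", focus management, Esc to close) or a vetted library.
    }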

One thing to remember is that following standards (like WCAG) is your best bet. Aiming for specific AT or browser support is not a good approach, since both get updated and the support between them can change.

Note: when reading through the Level Access wiki about AT support by browser, the information covered only the most popular browsers. Other browsers, like Opera, were not mentioned.

Day 36: Questions to Ask throughout the Product Life Cycle

Today wrapped up my research for the week about integrating accessibility into a product’s life cycle. I ended by reviewing what a product’s life cycle looks like, and how everyone can play a role during each phase to ensure accessibility is considered throughout development rather than treated as an afterthought.

Things I accomplished

What I learned today

Stages of a product’s lifecycle with accessibility questions to ask at each stage:

  • concept: does it solve a problem for people with disabilities? what are the different user needs?
  • requirements: what accessibility standards/laws does it need to follow?
  • design: do the mockups create any barriers?
  • prototyping: does the prototype create any perceivable, operable, or understandable errors?
  • development: is the code following standards? are appropriate patterns being used?
  • quality assurance (QA): are automated and manual accessibility checks being run?
  • user acceptance testing (UAT): does this product work for real users with disabilities?
  • regression testing: when updates are made, are checks still passing?

Day 35: A11y Verification Testing as Part of User Testing

Today’s mission was to reflect on user (usability) testing, and search for accessibility verification testing (AVT). AVT was new to me, and I had a harder time finding that exact word combination when searching the web. So, I ended up reading about accessibility user testing and manual testing a developer can perform on her own to emulate user testing.

Things I accomplished

What I learned today

  • Accessibility testing is a subset of usability testing.
  • User testing for accessibility requires recruiting real users with disabilities. The rest is much like general user testing:
    • observing them in an environment familiar to them,
    • assigning tasks to accomplish,
    • observing unspoken actions,
    • scrutinizing results, and
    • concluding what changes need to happen.

Cool resource

Designing for Guidance (Microsoft) [PDF] offers tips about the varied learning styles that people have. With the approach of inclusive design in mind, this tiny booklet will make you think more critically when you develop a product or learning course for a large audience.

Day 34: TalkBack on Android

A diversion from my quality assurance research this week, out of the necessity of testing with an Android screen reader at work. Time spent today: 2 hours.

Things I accomplished

What I learned today

  • TalkBack wasn’t too different from my experience with VoiceOver; there are just some minor gesture differences.
  • The equivalent functionality to VoiceOver’s rotor is the Local Context Menu.
  • TalkBack quick access can be set to a triple-click of the Home button on my S5 to turn it on.
  • TalkBack keyboard events are not the same as touch events. It can be hard to develop for all TalkBack users (some keyboard users, some touch users).
  • 29.5% of respondents to WebAIM’s screen reader survey said they use TalkBack.
  • A two-finger or three-finger swipe navigates me through my multiple screens.
  • The “explore by touch” feature reads focusable items as I drag my finger around the screen.
  • Entering my PIN was easier with TalkBack than it was with VoiceOver.

Day 33: A11y, UX, or Both?

Things I accomplished

What I learned today

Comparing accessibility and user experience, both have benefits for all, yet differ mostly by audience:

  • accessibility
    • audience: people with disabilities
    • intent: the targeted audience can perceive, understand, navigate, and interact with websites and tools (an equivalent user experience)
  • user experience
    • audience: anyone
    • intent: a product should be effective, efficient, and satisfying

Accessibility includes a more technical aspect (it has to account for assistive technologies, for instance); UX is more principled in its approach.

Usable accessibility = a11y + UX.

Accessibility is just one aspect of the “universal web”.

Looking at accessibility a little closer, what makes a person disabled? We may think of someone with a disability as having a certified report from a doctor, or an obvious physical or mental difference from ourselves. Yet disabilities are actually better defined as a conflict between a person’s abilities and their environment. That puts us all on a spectrum, doesn’t it?

An accurate statement

“For people without disabilities, technology makes things convenient. For people with disabilities, it makes things possible.” – Judith Heumann, U.S. Department of Education’s Assistant Secretary of the Office of Special Education and Rehabilitative Services

Factoid resource

  • Section 255 of the Telecommunications Act of 1996: Fueling the Creation of New Electronic Curbcuts
    A timeline of IT innovations that were built for people with disabilities but made their way into mainstream tech use. To my surprise: the typewriter!

Day 32: Benefits of Designing for A11y

Things I accomplished

What I learned today

There are several benefits to starting out a design process with accessibility in mind, rather than catching it after production and resorting to remediation. Some of these benefits include:

  • a solid customer base due to an off-the-shelf accessible product,
  • saved money by building it right rather than redesigning over and over again,
  • minimized legal risk,
  • innovation and the challenge to solve real-world problems,
  • improved productivity,
  • improved diversity, and
  • improved corporate image and brand when accessible technologies and strategies are incorporated within the organization.

Day 31: A11y throughout a Product’s Lifecycle, Waterfall vs. Agile

Moving on to the next WAS Body of Knowledge study topic: integrating accessibility into the quality assurance process. Approximate study time: 1 hour.

Things I accomplished

What I learned today

Considering accessibility shouldn’t just happen at the mockup or code level. It can happen throughout the product’s entire life:

  • concept,
  • requirements,
  • design,
  • prototyping,
  • development,
  • quality assurance,
  • user testing, and
  • regression testing.

Additionally, I read about the agile and waterfall processes and when to apply accessibility while working through each cycle:

  • waterfall approach: accessibility is present throughout each step and is well documented
  • agile approach: accessibility is discussed at scrum meetings, established as a requirement, built into design and architecture, and verified with standardized testing using TDD and automation

I love this statement by Karl Groves, which nails it when it comes to encouraging the development of an accessible product rather than blocking it:

“Become a member of the team, not a gatekeeper, and you will be seen as a resource instead of a hurdle.”

Day 30: Reminders of the Who, Why, and How of A11y

After learning a bit this past week about principles and concepts to create JavaScript that is accessible, I used today as a chance to remind myself of who we are doing this for, why it’s helpful to them, and how we can strive to meet them where they are.

Spending an hour of my time today, I used IAAP’s Prepare for WAS online resources list to get me started.

Things I accomplished

What I learned today

  • Access to information and communications technologies, including the Web, is defined as a basic human right in the United Nations Convention on the Rights of Persons with Disabilities (UN CRPD).
  • Accessibility really requires the cooperation of several moving parts for it to work:
    • content
    • user agents (browsers, media players, etc)
    • assistive technology
    • user knowledge and experience
    • developers, designers, content creators
    • authoring tools, and
    • evaluation tools.
  • I’m very familiar with assistive technologies when it comes to users being able to access websites. However, learning about adaptive strategies was new to me, though it rang true. Adaptive strategies include increasing the mouse pointer size, turning on captions, and reducing mouse speed. These techniques are usually how a user adapts to more mainstream technology.

Day 29: Making Dynamic Content Perceivable

WCAG 2.1 Success Criteria (SC) 4.1.3 and 1.3.2 are just two good reasons that we, as designers and developers, should be mindful of how and where we add new content while a user is interacting with our website. Today I spent an hour to see how deep I could dig into the concept of making dynamic content on a page perceivable to people who use assistive technology. I didn’t get as far or learn as much as I’d hoped, but I have included in this post a few of the resources I found helpful during my search.

As an aside, one fun thing about this journey has been revisiting familiar websites and running across familiar names in the web accessibility circle.

Thing I accomplished

  • Searched for articles and videos about making dynamic content perceivable, as well as managing DOM order, which both seem to go hand in hand.

What I learned today

  • When adding or updating content, be sure it’s appended after the user’s current point of focus in the DOM (see the sketch below). That makes sense, as many users (not just screen reader users) will likely not go backward in the flow of content.
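
Here’s a minimal sketch of what I mean, assuming a hypothetical “Show details” button with id "show-details"; the new content is inserted immediately after the button so it falls after the user’s current point of focus in the reading order.

    // Insert new content *after* the triggering control in the DOM, so keyboard
    // and screen reader users encounter it as they continue forward.
    const button = document.getElementById('show-details'); // hypothetical id
    button.addEventListener('click', () => {
      const details = document.createElement('p');
      details.textContent = 'Here are the extra details you asked for.';
      // 'afterend' places the new element right after the button, i.e. after
      // the user's current point of focus in the reading order.
      button.insertAdjacentElement('afterend', details);
    });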

Resources

Day 28: Accessible JavaScript Events

Today I sought to learn about JavaScript events. Specifically, in the context of accessibility, I wanted to dig deeper into two ideas that can make or break the interaction of assistive technologies with websites and web apps:

  1. there should be no more than one event assigned to an element (some exceptions may apply)
  2. create device-independent event handlers

Thing I accomplished

Searched for articles and videos pertaining to proper use of event handlers to optimize accessibility.

What I learned today

As developers, we shouldn’t offer interaction with just one type of device or peripheral. Coding device-independent event handlers will open up the experience to a wider audience of users. We know to make our sites keyboard accessible, but we shouldn’t build just for keyboard users either. Examples of device-independent event handlers:

  • onFocus
  • onBlur
  • onSelect
  • onChange
  • onClick (when used with links or form elements)

The aforementioned examples aren’t unbreakable, though; altering their default behaviors can still present problems. A small sketch of a device-independent handler follows.
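
As a small illustration of device independence, here’s a sketch that uses a real button element and a hypothetical id "save-draft"; a single click handler covers mouse, touch, and keyboard (Enter/Space) because the browser handles that mapping for native buttons.

    // One handler, many input devices: a native <button> fires "click" for
    // mouse, touch, and keyboard activation alike.
    const saveButton = document.getElementById('save-draft'); // hypothetical id
    saveButton.addEventListener('click', () => {
      saveButton.textContent = 'Saved!'; // placeholder action for the sketch
    });

    // Contrast: if this were a <div> with only a mousedown or dblclick handler,
    // keyboard and switch users would have no way to trigger it.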

While searching for articles, I was amazed to go back 10 years (or more!) on the topic of accessible JavaScript. My time in this field is still so fresh and new that I forget how long these conversations have been going on. A huge thank you to all who have initiated these conversations and built an education for the rest of us!

Resources

Day 27: Managing Focus and Logical Order

Properly managing focus, especially within web applications, is a key component of making JavaScript-driven web pages accessible. Determining the logical order of code and components can be another challenge when it comes to web accessibility. Sites need to be coded thoughtfully so that the reading order of each section of the page is the same visually and audibly.

I learned a few things today that will not only make me a better accessibility specialist, but also a better developer.

Things I accomplished

What I learned today

  • There’s a ‘template’ HTML element for rendering content when called upon. Cool! Not supported by IE 11, of course.
  • JavaScript has a focus() method to assist with setting focus on elements that do not naturally receive focus.
  • In the case of an SPA, don’t forget to change the page title alongside managing focus on the page’s content (see the sketch after this list).
  • The position of some “subscribe to newsletter” components can be problematic if not placed appropriately in the DOM. I had to quickly evaluate this site, and was relieved that it didn’t have that problem.
  • Labeling (naming) techniques are not all treated equally; an element’s accessible name is computed by precedence. In other words, if several techniques are combined, one can override another. Examples of labels (names) that may override one another, depending on precedence:
    • aria-labelledby
    • aria-label
    • label
    • title
  • Reminder: not every person who uses a screen reader is a keyboard user; likewise, not every keyboard user is a person who uses a screen reader.
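
To tie the SPA bullet above to code, here’s a minimal sketch of managing the page title and focus together after a route change. The heading id "page-heading" and the function name are my own made-up examples, called after the new view’s content has been rendered.

    // Keep the title and the user's focus in sync with the new "page" of an SPA.
    function announceRouteChange(newTitle) {
      document.title = newTitle;                    // update the page title
      const heading = document.getElementById('page-heading'); // hypothetical main heading
      heading.setAttribute('tabindex', '-1');       // make it programmatically focusable
      heading.focus();                              // move keyboard/screen reader users to the new content
    }

    // e.g. announceRouteChange('Order history – My Store');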

Resources

Day 26: Use JavaScript Appropriately (and For Good)

Study hiatus on Christmas Day. I was just having too much fun being with my family. Back at it today, despite the sudden onset of a head cold. Time spent studying: 1 hour.

Things I accomplished

What I learned today

  • Using languages appropriately is not only good practice, but also good accessibility. CSS was meant for visual design, and using it to make content dynamic can break accessibility. The same goes for JavaScript when used beyond what it was intended for. JavaScript is great for updating content, but server-side scripting should be used to help increase accessibility, security, and progressive enhancement, especially when it comes to implementing forms.
  • Discovered Nomensa has a YouTube channel with some very helpful videos about web accessibility. It’s only within the past month that I’ve learned about Nomensa.
  • Automatically submitting a form onchange strips control from the user. Giving the user control of form submission is helpful to users of assistive technology, and to anyone who may get confused by a sudden information update on the page (see the sketch after this list).
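
Here’s a small sketch of keeping submission in the user’s hands, assuming a hypothetical form with id "shipping-form" and a select with id "country"; the commented-out line shows the onchange anti-pattern to avoid.

    const form = document.getElementById('shipping-form'); // hypothetical ids
    const country = document.getElementById('country');

    // Anti-pattern to avoid: submitting the moment the value changes.
    // country.addEventListener('change', () => form.submit());

    // Instead, only submit when the user explicitly activates the submit button.
    form.addEventListener('submit', (event) => {
      event.preventDefault(); // placeholder: handle the submission here
      console.log('Submitting with country:', country.value);
    });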

The following ideas were not new to me, yet I appreciated the reminders, especially in context of web accessibility (not just usability):

  • JavaScript should be an enhancement: it should enhance the experience without being obtrusive to any user.
  • This quote really speaks to me, not just about JavaScript, but about accessibility in general. That’s why WCAG principles, guidelines, and success criteria were set in place: so that all designers, developers, etc. can understand the why and how of accessibility. “You can paint a picture with a paint-by-numbers kit, but you will have trouble explaining how the harmonies of the picture were achieved and if there is a special meaning in the use of a certain color.” – Christian Heilmann
  • I should pin this up in my cubicle: “The browser, its settings, and its functionality belong to the visitor, and are not yours to dictate or remove.” – Christian Heilmann
  • Essential markup should not rely on JavaScript. This feels like a hard lesson in 2018 with all our fancy web apps. Going back to my first bullet point, I can be reminded that using each language for what it was intended to do can help overcome this challenge. Ask yourself, “Does this script help visitors to reach a goal faster or overcome a problem, or is it just there because it is flashy or trendy?”
  • Take caution when you’re about to break convention. You may be breaking a solid user experience, too.

Day 25: Introduction to Accessible JavaScript

Continuing my course of study by following the WAS Body of Knowledge (BOK), I’ve moved on to Accessible JavaScript, AJAX, and interactive content for the coming week. The BOK offers a basic list of things to consider when writing dynamic content and code:

  • Manage focus
  • Use semantic HTML
  • Keep content and its changes perceivable
  • Create device-independent event handlers
  • Consider DOM order when adding new content dynamically, and
  • Simplify events.

However, this list isn’t exhaustive, and the BOK doesn’t go into greater detail about what I need to study or be more knowledgeable about. It does reassure me that I don’t have to be a JavaScript expert to understand the concepts, principles, and strategies for creating accessible code and content.

All that being said, I’d like to get a handle on the basic concepts provided, learn from good examples of accessible JavaScript, and discover other strategies that could be important for the WAS exam and my future as a digital accessibility consultant.

Things I accomplished

I gave myself a bit of slack today since it’s Christmas Eve, and my family time is more important than my study habits during holidays. However, I did dedicate 45 minutes to remain consistent and get a jump on the next section of the BOK.

What I learned today

  • JavaScript is not inherently good or evil. Dependent upon the programmer, the use of JavaScript can create barriers or improve accessibility.
  • WCAG 2.0 requires that JavaScript, when enabled, must be accessible.
  • The Enter key doesn’t always trigger an onClick event if used on a non-link or non-control element (i.e. a div element). In those cases, Enter or Space key presses will have to be detected to trigger the interaction (see the sketch below).
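
Here’s a minimal sketch of that Enter/Space detection, assuming a hypothetical div with id "fake-button", role="button", and tabindex="0"; a native button element would get all of this behavior for free.

    const fakeButton = document.getElementById('fake-button'); // hypothetical id

    function activate() {
      fakeButton.textContent = 'Activated!'; // placeholder action for the sketch
    }

    fakeButton.addEventListener('click', activate); // mouse and touch

    fakeButton.addEventListener('keydown', (event) => {
      if (event.key === 'Enter' || event.key === ' ') {
        event.preventDefault(); // stop Space from scrolling the page
        activate();             // keyboard activation
      }
    });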

 

Day 24: Better VoiceOver Practice

Today I came back to practice using VoiceOver (VO) a bit more, since I was struggling with it yesterday. More practice definitely gave me more confidence. It would take a week of consistent use for me to use VO more naturally with my laptop. That’s an aspiration for the near future.

Things I accomplished

  • Walked through all 22 steps of the built-in VoiceOver Quick Start tutorial.
  • Read Chapters 1, 2, and 6 of Apple’s VoiceOver Getting Started Guide.
  • Added keyboard shortcuts to my study spreadsheet.

What I learned today

  • VO has a Trackpad Commander option. This meant I could use some of the same gestures on my MacBook Pro (MBP) trackpad that I use on my iPhone! This was an important discovery for me, offering cross-device ease of use.
  • Control + Option + Spacebar selects my choice for interactive components like checkboxes, radio buttons, buttons, etc.
  • I finally got the hang of stepping in and out of different components and windows by using Control + Option + Shift + up/down arrows. For some reason, my brain struggled with this yesterday.
  • Control + Option + D gets me quickly into the dock of my MBP.
  • Control + Option + M goes directly to my MBP menu.
  • Control + Option + K opens keyboard help. When it’s open, it explains what each key does when pressed while holding down the Control + Option keys.
  • Control + Option + H + H opens up a Command help dialog, which lists all the different keyboard shortcuts for specific commands and tasks.
  • Web Spots is a generated list of areas of the current webpage based on VoiceOver’s interpretation of the page’s visual design.
  • Control + Option + ; (semi-colon key) locks the VO modifier keys so you don’t have to keep holding them for shortcut commands. This was a big deal to learn! It seemed ridiculous to keep holding down 2-4 keys at a time while pressing another key.
  • Control + Option + Shift + I creates a verbal overview of the page, including how many headings, links, landmarks, etc.