Day 100: WCAG and Motor Disabilities

Last official study day! I’ll continue to review my notes, WCAG, screen reader shortcuts, and work through Deque courses, but I will not feel obligated to post every day after this. A summary of the 100-day journey is still to come.

A couple of days ago, I covered WCAG and hearing impairments, so today I reviewed WCAG again to see how it benefits people with motor impairments who want to use the web.

Things I accomplished

What I reviewed

  • WCAG success criteria that benefit people with motor impairments;

What I learned from it

The following lists target WCAG success criteria that benefit people with motor impairments.

Level A

  • 1.3.2 Meaningful sequence
  • 2.1.1 Keyboard
  • 2.1.2 No keyboard trap
  • 2.1.4 Character key shortcuts (v2.1)
  • 2.2.1 Timing adjustable
  • 2.2.2 Pause, stop, hide
  • 2.4.1 Bypass blocks
  • 2.4.3 Focus order
  • 2.5.1 Pointer gestures (v2.1)
  • 2.5.2 Pointer cancellation (v2.1)
  • 2.5.4 Motion actuation (v2.1)
  • 3.2.1 On focus
  • 3.2.2 On input

Level AA

  • 1.3.4 Orientation
  • 1.4.13 Content on hover or focus (v2.1)
  • 2.4.5 Multiple ways
  • 2.4.7 Focus visible
  • 3.3.4 Error prevention (legal, financial, data)

Level AAA

  • 2.1.3 Keyboard (no exception)
  • 2.2.3 No timing
  • 2.2.4 Interruptions
  • 2.2.5 Re-authenticating
  • 2.2.6 Timeouts (v2.1)
  • 2.5.5 Target size (v2.1)
  • 2.5.6 Concurrent input mechanisms (v2.1)
  • 3.2.5 Change on request
  • 3.3.6 Error prevention (all)

Day 98: WCAG and Hearing Impairments

Yesterday’s learning about how WCAG benefits people with visual impairments pushed me to continue my WCAG overview and see how it benefits people with hearing impairments (deaf and hard of hearing). People who are deafblind benefit from combining the design techniques for visual and hearing impairments.

Things I accomplished

What I reviewed today

  • Semantic structures that benefit screen reader users (and sometimes everyone):
    • links (WCAG 2.4.4, A & 2.4.9, AAA)
    • navigation between pages (WCAG 3.2.3, AA & 3.2.4, AA)
    • navigation on page
  • Navigation keyboard shortcuts for screen readers.
  • WCAG success criteria that benefit people with hearing impairments;

What I learned from it

Tips from Giles Colborne’s book Simple and Usable (as quoted in A Web for Everyone):

  • simplicity is good science and good interface design
  • simple designs put complexity in its place
  • observe real people to learn what’s needed
  • designing for multiple devices supports accessibility

aria-describedby and aria-labelledby WILL access content that is inside a container hidden using aria-hidden="true".
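
A minimal sketch of that behavior (the button and help text are placeholder content of my own):

```html
<!-- The description is still announced for the button, even though the
     referenced container is hidden from the accessibility tree. -->
<button aria-describedby="close-help">Close</button>
<div id="close-help" aria-hidden="true">
  Closing this dialog discards any unsaved changes.
</div>
```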

aria-labelledby, aria-describedby, aria-label, and hidden text are some ways to let a screen reader user know which page is current. Alternatively, use aria-current="page", which has some support.
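
For example, a small sketch of the aria-current approach (link names and URLs are placeholders):

```html
<nav aria-label="Main">
  <ul>
    <!-- aria-current="page" tells screen readers which link is the current page. -->
    <li><a href="/" aria-current="page">Home</a></li>
    <li><a href="/services">Services</a></li>
    <li><a href="/contact">Contact</a></li>
  </ul>
</nav>
```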

The following lists target WCAG success criteria that benefit people with hearing impairments. The main concern for people who are hard of hearing is to provide text or sign language alternatives to any sound.

Level A

  • 1.1.1 Non-text alternatives
  • 1.2.1 Audio-only & Video-only
  • 1.2.2 Captions (pre-recorded)
  • 1.2.3 Audio description or media alternative (pre-recorded)
  • 1.3.3 Sensory characteristics

Level AA

  • 1.2.4 Captions (live)

Level AAA

  • 1.2.6 Sign language (pre-recorded)
  • 1.2.8 Media alternatives (pre-recorded)
  • 1.2.9 Audio-only (live)

Day 97: WCAG and Visual Impairments

Yesterday’s learning about how WCAG benefits people with cognitive disabilities inspired me to look over WCAG again to see how it benefits people with visual impairments (blind and low vision).

Things I accomplished

What I reviewed today

  • Semantic structures that benefit screen reader users (and sometimes everyone):
    • page title (WCAG 2.4.2, A)
    • page and parts language (WCAG 3.1.1, A & 3.1.2, AA)
    • landmarks (WCAG 1.3.1, A)
    • headings (WCAG 1.3.1, A & 2.4.6, AA & 2.4.10, AAA)
    • links
  • Navigation keyboard shortcuts for screen readers.
  • WCAG success criteria that benefit people with visual impairments;

What I learned from it

The support among screen readers is better for the simple two-letter language codes (like “en” for English) than for the localized language codes (like “en-au” for Australian English).
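
A quick sketch of both levels of language markup (WCAG 3.1.1 and 3.1.2), assuming English as the page language:

```html
<html lang="en">
<body>
  <!-- The page language is English; the phrase below is marked as French. -->
  <p>She wished us <span lang="fr">bon voyage</span> before the trip.</p>
</body>
</html>
```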

Screen readers list forms only if marked as role="form" (the <form> element will be ignored in landmark lists).
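
Per that note, a minimal sketch of a form exposed as a landmark (the field names are placeholders):

```html
<!-- role="form" plus an accessible name makes the form show up in landmark lists. -->
<form role="form" aria-label="Site search">
  <label for="q">Search</label>
  <input type="search" id="q" name="q">
  <button type="submit">Search</button>
</form>
```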

The name of a link is calculated as follows (in order of precedence by screen readers); see the sketch after this list:

  1. aria-labelledby
  2. aria-label
  3. Text contained between the opening <a> and closing </a> elements (including alt text on images)
  4. title attribute (note that this is considered a last resort method for screen readers to find something; it should not be considered a primary technique for giving names to links)
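
A sketch of that precedence, with placeholder link text and file names:

```html
<!-- 1. aria-labelledby wins over everything else. -->
<span id="report-name">2019 annual report (PDF)</span>
<a href="report.pdf" aria-labelledby="report-name">Download</a>

<!-- 2. aria-label is used when there is no aria-labelledby. -->
<a href="report.pdf" aria-label="Download the 2019 annual report">Download</a>

<!-- 3. Otherwise, the link contents (including img alt text) become the name. -->
<a href="report.pdf"><img src="report.png" alt="Download the 2019 annual report"></a>

<!-- 4. title is only a last resort, used when nothing above provides a name. -->
<a href="report.pdf" title="Download the 2019 annual report"><img src="report.png" alt=""></a>
```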

If headings have images, the alt text will show up in the headings list. Linked images (whether HTML img or CSS background image) can be assigned aria-label or aria-describedby. Spans can hide extra meaningful content for screen readers. All these alternatives make me wonder how that impacts people with cognitive disabilities or people who use speech recognition. It’s so important to design for more than one disability.
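
One common pattern for that last point, assuming a "visually hidden" class name of my own choosing:

```html
<a href="/pricing">
  Read more<span class="visually-hidden"> about our pricing plans</span>
</a>

<style>
  /* Visually hidden, but still read by screen readers. */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>
```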

There are so many success criteria for this disability, compared to cognitive disabilities, but I imagine that’s because it is more objective and measurable. The following lists target WCAG success criteria that benefit people with visual impairments.

Level A

  • 1.1.1 Non-text alternatives
  • 1.2.1 Audio-only & Video-only
  • 1.2.3 Audio description or media alternative (pre-recorded)
  • 1.3.1 Info and relationships
  • 1.3.2 Meaningful sequences
  • 1.3.3 Sensory characteristics
  • 1.4.1 Use of color
  • 1.4.2 Audio control
  • 2.1.1 Keyboard
  • 2.1.2 No keyboard trap
  • 2.1.4 Character key shortcuts (v2.1)
  • 2.2.2 Pause, stop, hide
  • 2.4.1 Bypass blocks
  • 2.4.2 Page titled
  • 2.4.3 Focus order
  • 2.4.4 Link purpose (in context)
  • 3.1.1 Language of page
  • 3.2.1 On focus
  • 3.2.2 On input
  • 3.3.1 Error identification
  • 3.3.2 Labels or instructions
  • 4.1.1 Parsing
  • 4.1.2 Name, role, value

Level AA

  • 1.2.5 Audio description (pre-recorded)
  • 1.3.4 Orientation (v2.1)
  • 1.3.5 Identify input purposes (v2.1)
  • 1.4.3 Contrast (minimum)
  • 1.4.4 Resize text
  • 1.4.5 Images of text
  • 1.4.10 Reflow (v2.1)
  • 1.4.11 Non-text contrast (v2.1)
  • 1.4.12 Text spacing (v2.1)
  • 1.4.13 Content on hover or focus (v2.1)
  • 2.4.5 Multiple ways
  • 2.4.6 Headings and labels
  • 2.4.7 Focus visible
  • 3.1.2 Language of parts
  • 3.2.3 Consistent navigation
  • 3.2.4 Consistent identification
  • 3.3.3 Error suggestion
  • 3.3.4 Error prevention (legal, financial, data)
  • 4.1.3 Status messages (v2.1)

Level AAA

  • 1.2.7 Extended audio description
  • 1.2.8 Media alternatives (pre-recorded)
  • 1.3.6 Identify purpose (v2.1)
  • 1.4.6 Contrast (enhanced)
  • 1.4.8 Visual presentation
  • 1.4.9 Images of text (no exception)
  • 2.1.3 Keyboard (no exception)
  • 2.2.4 Interruptions
  • 2.4.8 Location
  • 2.4.9 Link purpose (link only)
  • 2.4.10 Section headings
  • 2.5.5 Target size (v2.1)
  • 2.5.6 Concurrent input mechanisms (v2.1)
  • 3.2.5 Change on request
  • 3.3.6 Error prevention (all)

Day 96: WCAG and Cognitive Disabilities

Even as I’m learning about design, it still comes down to me focusing more on people, their abilities, and the way they interact with the web. As usual, some learning leads to more questions, and more discoveries.

Things I accomplished

What I reviewed today

  • WCAG success criteria that benefit people with cognitive disabilities;
  • overview of ATAG, Part B (Authoring Tool Accessibility Guidelines):
    1. Fully automatic processes produce accessible content
    2. Authors are supported in producing accessible content
    3. Authors are supported in improving the accessibility of existing content
    4. Authoring tools promote and integrate their accessibility features
  • design considerations for various disability categories
  • accessibility-first mindset:
    • avoid exclusive design patterns
    • embrace diversity
    • create inclusive design

What I learned from it

The following lists target WCAG success criteria that benefit people with cognitive disabilities.

Level A

  • 2.5.1 Pointer gestures (v2.1)
  • 2.5.3 Label in Name (v2.1)
  • 2.5.4 Motion actuation (v2.1)
  • 3.3.1 Error identification
  • 3.3.2 Labels or instructions

Level AA

  • 1.3.4 Orientation (v2.1)
  • 1.3.5 Identify input purposes (v2.1)
  • 1.4.10 Reflow (v2.1)
  • 1.4.12 Text spacing (v2.1)
  • 1.4.13 Content on hover or focus (v2.1)
  • 2.5.6 Concurrent input mechanisms (v2.1)
  • 3.2.3 Consistent navigation
  • 3.2.4 Consistent identification
  • 3.3.3 Error suggestion
  • 3.3.4 Error prevention (legal, financial, data)

Level AAA

  • 1.3.6 Identify purpose (v2.1)
  • 2.2.6 Timeouts (v2.1)
  • 2.3.3 Animation from interactions (v2.1)
  • 3.1.3 Unusual words
  • 3.1.4 Abbreviations
  • 3.1.5 Reading level
  • 3.1.6 Pronunciation
  • 3.2.5 Change on request
  • 3.3.5 Help
  • 3.3.6 Error prevention (all)

Day 95: Designing an Accessible User Experience, Part 3

Today’s dedicated accessibility time was spent finishing the walkthrough of designing an accessible user experience, continuing from Part 2.

Things I accomplished

  • Continued Deque’s “Designing an Accessible User Experience” course. 85% complete.
  • Continued reading A Web for Everyone. 8% complete.

What I reviewed today

  • Ability + Barrier = Disability;
  • Design + Accessibility = Inclusive Design;
  • UX for blind: audio-structural experience and interaction;
  • JAWS keystrokes (Insert + F3, Insert + Ctrl + R);
  • UX for deafblind: tactile-structural text-only;
  • UX for deaf: silent-visual;
  • Cognitive disabilities

What I learned from it

It’s usually best to keep the number of landmarks to a relatively short list, because part of the point of landmarks is to make it faster and easier to find things. The more landmarks there are, the less they help make things faster or easier.
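
For reference, a short landmark structure can be as simple as this (the content is placeholder text):

```html
<header>Site name and banner</header>
<nav aria-label="Main">Primary navigation links</nav>
<main>
  <h1>Page title</h1>
  Unique page content
</main>
<footer>Contact and copyright information</footer>
```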

The most distinctive challenge for deafblindness is multimedia content. Solutions:

  • think text-first
  • create a simple design
  • use semantic structure
  • offer control over timing
  • use common words/phrases
  • apply screen reader techniques

WebVTT is one of the most versatile caption formats because users can set preferences like color, size, and font at the system level, which can trickle down to the browser level.
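
A minimal sketch of attaching WebVTT captions to a video (the file names are placeholders):

```html
<video controls>
  <source src="intro.mp4" type="video/mp4">
  <!-- The .vtt file holds the captions; the browser renders them
       using the user's own caption preferences. -->
  <track kind="captions" src="intro-captions.vtt" srclang="en" label="English">
</video>
```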

WCAG 2.1 adds some consideration for cognitive disabilities, but there is so much more to consider that can’t be quantified as success criteria. Challenges to understand when considering a variety of traits under the cognitive disabilities category:

  • complex concepts
  • abstraction
  • sarcasm and satire
  • self versus others
  • problem-solving and critical thinking
  • speed
  • memory
  • attention
  • reading
  • speech and language
  • math
  • behavior
  • visual perception

Horton & Quesenbery constructed 9 design principles for incorporating accessibility into a website or application:

  1. people first: designing for differences
  2. clear purpose: well-defined goals
  3. solid structure: built to standards
  4. easy interaction: everything works
  5. helpful wayfinding: guides users
  6. clean presentation: supports meaning
  7. plain language: creates a conversation
  8. accessible media: supports all senses
  9. universal usability: creates delight

Best statement of the day

“The more you can think in terms of the semantic structure, the more successful you will be at creating a good user experience for screen reader users.”

Day 88: Presentation Prep – Humanizing People with Disabilities

Today I spent a lot of time preparing for a library conference talk about accessible spaces and being mindful of people with disabilities. So, rather than go into further study with Deque courses or deep-diving into accessibility laws (as originally planned), I decided to blog about the presentation I prepared for.

Things I accomplished

  • Compiled an outline and draft of slides for the presentation.
  • Interviewed three people about their disability.

What I reviewed today

My presentation is actually a co-presentation. Speaking alongside two other people, my part will specifically focus on the “who” of creating accessible workstations and spaces. Hopefully, the following outline will fit into a 15-minute time frame:

  1. What is a disability?
    1. Definition
    2. General categories
    3. Specific categories
    4. Specific disabilities
    5. Spectrums
    6. Related categories (elderly, environmental, temporary)
  2. Assistive Technologies & Adaptive Strategies
    1. Screen readers
    2. Magnification & zoom
    3. High contrast mode & custom styles
    4. Switch access and control
    5. Speech recognition
    6. Eye-tracking
    7. Augmentative and Alternative Communications (AAC)
  3. What is accessibility?
    1. Definition
  4. So, who are these people, anyway?
    1. Stephen
    2. Michael
    3. Chrissie
    4. How many Alaskans?
    5. Julie
    6. Tracy
    7. Me
    8. Who do you know?
  5. The point: They are people
  6. How can we be accommodating?
  7. Contact me

The overall intent of my talk is to humanize disabilities. What I really enjoyed about today’s preparation was the opportunity to talk with other people about their disabilities, and hear about the barriers they’ve encountered that made them feel disabled. The most fascinating part was that, out of the three people I interviewed, no one considered themselves disabled or having a disability. Only when they encountered a challenge or a complete roadblock did they consider themselves as having a disability.

Day 86: The point – it’s for people with disabilities, Part 2

A continuation of Part 1 as I work through the Deque courses and review who I am doing this work for.

Things I accomplished

What I reviewed today

Disabilities that I reviewed today through the Deque course:

  • deaf
  • deafblind
  • motor disabilities
  • speech disabilities
  • cognitive disabilities
  • reading disabilities
  • seizures
  • multiple disabilities

Deaf

How they may interact:

  • utilize captions and transcripts for video

Developer considerations:

  • offer transcript alongside an audio file (WCAG 1.2.1, 1.2.9)
  • offer captions alongside video with audio (WCAG 1.2.2, 1.2.4)
  • when possible, offer sign language with videos with audio (WCAG 1.2.6)

Review Day 55: Users with Auditory Disabilities.

Deafblind

How they may interact:

  • interacts with keyboard (QWERTY or braille)
  • receives information through refreshable braille display and screen reader software

Developer considerations:

  • content needs to be text or coupled with text equivalents (WCAG 1.1)
  • site functionality must work with a keyboard (WCAG 2.1)
  • markup must be structured well, using appropriate semantics (WCAG 1.3, 2.4, & 4.1.1)
  • custom elements must express themselves with a name, role, and value (WCAG 4.1.2)
  • dynamic changes in content come with an alert for screen readers (WCAG 4.1.3)
  • videos need audio description if the audio is confusing by itself (WCAG 1.2)
  • active controls need to be clickable (WCAG 2.5)
  • offer transcript alongside an audio file (WCAG 1.2.1, 1.2.9)
  • offer captions alongside video with audio (WCAG 1.2.2, 1.2.4)
  • when possible, offer sign language with videos with audio (WCAG 1.2.6)

Motor disabilities

Motor disabilities include a wide spectrum of varying degrees and characteristics of physical experiences, challenges, and strategies. Specific disabilities include cerebral palsy, ALS, quadriplegia, and missing limbs. Review Day 54: Users with Motoric Disabilities.

How they may interact:

  • mouth stick on keyboard (vertical or horizontal)
  • adaptive keyboard (one-handed, expanded, raised keys, etc.)
  • switch control devices
  • speech recognition software
  • eye tracking software

Developer considerations:

  • site functionality must work with a keyboard (WCAG 2.1)
  • interactive components (links, buttons, input) need a visible focus and hover state (WCAG 1.4.13)
  • warn users about time outs ahead of time, and offer extension of time (WCAG 2.2)
  • make interactive controls large clickable targets (WCAG 2.5.5); see the sketch after this list
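
A hedged sketch of those last two considerations (the selector name, color, and sizes are my own choices):

```html
<style>
  /* A clearly visible focus indicator for keyboard and switch users. */
  a:focus,
  button:focus {
    outline: 3px solid #1a73e8;
    outline-offset: 2px;
  }

  /* A larger clickable target, roughly the 44 by 44 pixel guidance in 2.5.5. */
  .action-button {
    min-width: 44px;
    min-height: 44px;
    padding: 12px 20px;
  }
</style>
<button class="action-button">Save</button>
```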

Speech disabilities

The causes of speech disabilities range from learning, motor, or auditory disabilities to autism, brain injury, stroke, and cancer. People with speech disabilities may or may not have full use of their voice, and the degree of control over that voice varies. Some issues can be categorized as stuttering, cluttering, apraxia, dysarthria, speech sound disorders, or being non-vocal.

How they may interact:

  • unaided augmentative and alternative communication (AAC): body language expressions, gestures
  • aided augmentative and alternative communication (AAC): pen & paper, boards with symbols, text-to-speech software

Developer considerations:

  • provide input methods other than voice input (WCAG 2.5.6)

Cognitive disabilities

Cognitive disabilities cannot be easily defined due to their wide spectrum. Some characteristics may include limited comprehension, low tolerance for cognitive overload, limited problem-solving skills, short-term memory loss, attention deficits, difficulty reading, and difficulty understanding math. Because the spectrum is so wide, this is the most common disability category. Review Day 53: Users with Cognitive Disabilities.

How they may interact:

Developer considerations:

  • create a simple interface (WCAG 1.4.8, 1.4.12)
  • write clear, direct, and easy to understand content, which includes a mixture of images and text (WCAG 3.1, 1.3.3)
  • post shorter videos and audio tracks
  • limit the number of choices offered at one time
  • offer help features (WCAG 3.3.5)
  • design for ease of use
  • test for usability with actual users with this disability
  • strive for consistency of information, navigation, and landmarks across the website (WCAG 3.2.3, 3.2.4)
  • reduce or allow control of distracting elements (motion, animation, autoplay) on a page (WCAG 2.2.2)
  • warn users about time outs ahead of time, and offer extension of time (WCAG 2.2)
  • avoid use of Captcha

Reading disabilities

This could be caused by a cognitive disability or another underlying reason. Review Day 52: Users with Reading Difficulties.

How they may interact:

  • customize foreground and background colors
  • customize typography
  • listen to text with a screen reader
  • use a screen reader to highlight text and follow along

Developer considerations:

  • include a mixture of images and text to convey the same information (WCAG 3.1, 1.3.3)
  • use good color contrast, but avoid the highest level, like black on white (WCAG 1.4.3, 1.4.6)
  • allow user customization of text and background styles (WCAG 4.1.1)

Seizures

How they may interact:

  • reduce animation and pause or skip video

Developer considerations:

  • avoid using video, transitions, and animations with frequent intense flashing (WCAG 2.3); see the sketch after this list
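
Related to this, one hedged approach for users who have asked their system for less motion. It does not remove flashing media itself, only non-essential animation:

```html
<style>
  /* Honor the OS-level "reduce motion" setting by turning off animation. */
  @media (prefers-reduced-motion: reduce) {
    * {
      animation: none !important;
      transition: none !important;
    }
  }
</style>
```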

Multiple disabilities

A person deals with two or more disabilities.

How they may interact:

  • see all considerations under blind, low vision, deaf, deafblind, motor disabilities, speech disabilities, cognitive disabilities, reading disabilities, and seizures

Developer considerations:

  • see all considerations under blind, low vision, deaf, deafblind, motor disabilities, speech disabilities, cognitive disabilities, reading disabilities, and seizures

Day 85: The point – it’s for people with disabilities, Part 1

“…if you don’t understand the challenges that people with disabilities face when using ICT products and services, you don’t really know accessibility. Knowing what challenges people face is central to knowing how to reduce or eliminate challenges.” Karl Groves, What does it take to call yourself an accessibility expert?

That about sums it up for me after working through the WAS Body of Knowledge. The only way one can evaluate websites well is to remember who could be using our sites and how their engagement and experience may differ from our own. That’s basic UX (user experience) design, but with a focus on users with disabilities, which still encompasses a wide range of people and engagement strategies.

Things I accomplished

What I reviewed today

I’m trying to bring my focus back to the “who” part of my training. Without keeping them in the front of my mind, I will not be able to properly advocate for accessibility. Deque’s course highlights the following disabilities:

  • blind
  • low vision
  • color-blind
  • deaf
  • deafblind
  • motor disabilities
  • speech disabilities
  • cognitive disabilities
  • reading disabilities
  • seizures
  • multiple disabilities

Today I read through their explanations of various visual impairments. I found it helpful to revisit what I learned in the past about users with low vision and about identifying issues for keyboard users.

Blind

How they may interact:

  • may navigate by headings, landmarks, links via screen reader software;
  • listen for title and structure details of page via screen reader software;
  • use screen reader software, keyboard, refreshable braille display, touchscreen, or voice commands for input or output

Developer considerations:

  • content needs to be text or coupled with text equivalents (WCAG 1.1)
  • site functionality must work with a keyboard (WCAG 2.1)
  • markup must be structured well, using appropriate semantics (WCAG 1.3, 2.4, & 4.1.1)
  • custom elements must express themselves with a name, role, and value (WCAG 4.1.2)
  • dynamic changes in content come with an alert for screen readers (WCAG 4.1.3)
  • videos need audio description if the audio is confusing by itself (WCAG 1.2)
  • active controls need to be clickable (WCAG 2.5)

Low Vision

Low vision is a spectrum. It varies in degrees and characteristics.

How they may interact:

  • magnify the entire screen (with magnification software), zoom into web pages, or increase text size
  • increase contrast or invert colors with High Contrast Mode or other software
  • use screen reader to hear text
  • navigate by keyboard or mouse

Developer considerations:

  • popups, alerts, and errors should be close to the visual focus
  • color should not be the only way to relay important information (WCAG 1.4.1)
  • contrast of foreground and background should be no less than 4.5:1 (WCAG 1.4.3, 1.4.6, 1.4.11)
  • don’t disable pinch-to-zoom (see the sketch after this list)
  • interactive components (links, buttons, input) need a visible focus and hover state (WCAG 1.4.13)
  • controls need to look different (actionable) than text (WCAG 1.4.8)
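
For the pinch-to-zoom point above, a minimal sketch of a viewport tag that leaves zoom enabled (no user-scalable=no and no maximum-scale cap):

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```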

Color-blind

Color blindness is not an either-or characteristic, either; degrees of color identification vary from person to person.

How they may interact:

  • strategies to compensate may involve asking for help to distinguish colors

Developer considerations:

  • color should not be the only way to relay important information (WCAG 1.4.1)

Day 82: Testing with Users with Disabilities

Today’s study session led me back to usability testing. Alongside automated and manual testing tools, it seems critical to add usability testing to our toolbox to catch usability and accessibility issues that escape conformance checks.

Personally, this is one of the areas I struggle to implement. I love reading about usability testing and the case studies that people document, but I’ve not yet taken the opportunity to try it myself, usually because it carries its own added cost, as well as the awkwardness of setting up testing with a specific group of people. Maybe this will be my motivation to make some connections and start a plan to make this happen this year.

Things I accomplished

Read:

What I learned today

The guidelines are not all-inclusive. Some good accessibility techniques may not be in WCAG because:

  • It is difficult to objectively verify compliance with the technique
  • The writers of the guidelines did not recognize the need for the technique when writing the guidelines.
  • The technique was not necessary (or at least not anticipated) at the time the guidelines were written, because the technologies or circumstances that require the technique are newer than the guidelines.

Before bringing in users for testing, do some preliminary checks and fix known issues in order to better discover underlying accessibility and usability challenges that were not detectable by software or manual checks.

Including users in testing doesn’t have to be a full-blown usability study. Informal evaluations and brief interactions with feedback can be very helpful. Additionally, informal evaluations can happen throughout the product’s lifecycle, rather than only near the end of development, as formal usability studies usually do. Bonus: informal interactions can help us all see the person more clearly, rather than as a case study.

Never assume that feedback from one person with a disability speaks for all people with disabilities. A small-scale evaluation (only a few people within a study) is not enough to draw solid conclusions with statistical significance, even though it yields valuable insight. Try to include a variety of disabilities (auditory, cognitive, neurological, physical, speech, and visual) with different characteristics. If possible, include older people as well.

Further reading

 

Day 62: Identifying A11y Issues for Voice Input Users

Speech input software is an assistive technology and strategy that people use when they have difficulty using a keyboard or mouse. This may include people with motor, visual, or cognitive disabilities. In the 21st century, it’s an excellent alternative for people in all walks of life.

Things I accomplished

Watched:

Read:

What I learned today

Windows 10 has built-in speech recognition?? It sounds like a combination of Cortana and Speech Recognition could be a cheap alternative to Dragon, but I’d need to experiment a bit with both to compare.

Apple has a Dictation feature. So, somewhat like Windows, a combination of Siri and Dictation could be used. I’ve avoided setting up dictation just because of the privacy flag that pops up when it asks permission to connect to an Apple server and learn from your voice over the Internet. Maybe I’m just paranoid and they all actually work that way?

Dragon offers some ARIA support, but it appears to be limited, and should be tested if relying on aria-label, specific roles, etc.
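
For example, here is a hedged sketch of why aria-label needs that testing with voice input: the accessible name should contain the visible label (WCAG 2.5.3 Label in Name), since the visible text is what a voice input user will speak.

```html
<!-- Risky: the visible label is "Send", but the accessible name is
     "Submit form", so saying "click Send" may fail. -->
<button aria-label="Submit form">Send</button>

<!-- Safer: the accessible name starts with the visible label. -->
<button aria-label="Send message">Send</button>
```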

Love this catchphrase from the Web accessibility perspectives video:

“Web accessibility: essential for some, useful for all.”

Challenges that people who use speech recognition software face on the web:

  • carousels that move without a pause button
  • invisible focus indicators
  • mismatched visual order and tab order
  • linked images where the visible text and the alternative text don’t match
  • duplicate link text (e.g. Read More) that leads to different places
  • form controls without labels
  • hover-only menus (MouseGrid can struggle to access these)
  • small click targets
  • clickable items that don’t look clickable
  • too many links

Designers and developers should focus on WCAG’s Operable principle. In particular, the Navigable guideline’s success criteria apply here. If many of those success criteria are met with other users in mind, they will definitely benefit speech recognition users, too.

In the past, I haven’t personally been interested in software like Dragon, but looking at it from an accessibility point of view, I’m ready to start testing with speech input technology to better understand how it works and how it affects people who rely on it when interacting with the web.