What makes an Accessible iOS UITextField?
The other day, a conversation started about some accessibility advice from Apple that an accessibility expert friend of mine had shared. Two brilliant mobile accessibility experts disagreed. Let's see why...
Experts sometimes disagree. When that happens, we should come together, talk, and figure out why.
The Discussion
The general discussion is whether it's a good idea to hide the visible UILabel that accompanies a UITextField from VoiceOver, since the user already gets that text from the accessibility label of the UITextField itself. So why repeat the announcement?
If you don't hide the label, you get this experience:
Swipe Right: Username
Swipe Right: Username, Kermit
Swipe Right: Password
Swipe Right: Password, MsP1ggy!
If you do hide the label, you get this experience:
Swipe Right: Username, Kermit
Swipe Right: Password, MsP1ggy!
The second is the better gesture-based experience, but it involves hiding a control from touch-to-explore users. Notably, the information is not hidden; VoiceOver still announces it from the field itself.
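To make the two experiences concrete, here is a minimal UIKit sketch, assuming a hypothetical login screen with usernameLabel and usernameField outlets:

```swift
import UIKit

final class LoginViewController: UIViewController {
    // Hypothetical outlets for a simple login form.
    @IBOutlet private var usernameLabel: UILabel!
    @IBOutlet private var usernameField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Give the field its own accessibility label so VoiceOver
        // announces it no matter what happens to the visible label.
        usernameField.accessibilityLabel = usernameLabel.text

        // The line under debate: hide the visible label from VoiceOver
        // so a swipe lands on the field and the name is spoken once.
        usernameLabel.isAccessibilityElement = false
        // Omit that line and you get the first experience above,
        // where the label and the field are each announced in turn.
    }
}
```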
So Follow WCAG...
The problem is that WCAG doesn't really answer this question well. What do we do in an environment where users generally lack the ability to move input focus?
The nature of Markup Languages vs. Compiled Languages means that WCAG lacks the nuance to reach consensus on a topic that has accessibility implications at the component level.
Component level: keeping assistive technology interactions consistent for components (Buttons, TextFields, etc.) is a fundamental accessibility concern.
DOM elements in HTML can hold references to one another (via attributes like aria-labelledby) and exchange information quite simply. Because elements can reference each other this way, it is the assistive technology's choice how to present that relationship. Users can pick the experience they prefer by choosing JAWS over NVDA, or by using the arrow keys instead of the Tab key on their keyboard.
The Gap for Mobile
The problem for mobile is that the number of places where WCAG relies on information being "associated with one another" gets really high for fields that can have error messaging, instructions, and so on. The right place to put this information stops being obvious, and meaningful order isn't always meaningful enough.
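UIKit has no equivalent of those programmatic associations, so the relationship has to be rebuilt by hand. Here is one possible sketch (the function and parameter names are hypothetical, not an established pattern) that folds an error into the field's accessibility label and uses the hint for instructions:

```swift
import UIKit

// A sketch: manually rebuild the association that attributes like
// aria-describedby give HTML for free. Names here are hypothetical.
func applyAccessibility(to field: UITextField,
                        label: String,
                        instructions: String? = nil,
                        error: String? = nil) {
    // Lead with the error so the user hears it before the field name.
    field.accessibilityLabel = [error, label]
        .compactMap { $0 }
        .joined(separator: ", ")

    // VoiceOver reads hints last, after a pause, and users can turn
    // them off, which makes them a reasonable home for instructions.
    field.accessibilityHint = instructions
}

// Usage:
// applyAccessibility(to: passwordField,
//                    label: "Password",
//                    instructions: "Must be at least 8 characters.",
//                    error: "Password is required.")
```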
As someone who appreciates encouraging creativity and minimal conformance in my guidance, I generally support any solution that presents the user with all of the information in a reasonable order. As someone who has done research with a lot of screen reader users across a lot of different apps, I also know there is an optimal middle ground.
My hope is that by providing a way to discuss these techniques, and to explore them in environments where we can iterate quickly, we can come to consensus on the core issues that matter for component consistency.
Since I'm terrible with words, the rest of this blog post is a demo in the iOS Accessibility Simulator.