UX Series 4: Digital Accessibility and the UX Testing Process
The previous post in this series discussed the user experience (UX) design process, and it emphasized the importance of gathering evidence to make design decisions. This post focuses on the role that evaluation and testing play in user experience design, and how you can integrate accessibility into the evaluation process.
When we design with users in mind, we should evaluate our designs from the user perspective at the earliest possible opportunity. Evaluating early reduces the effort and cost of the adjustments we make in response to what we learn, and it lowers the risk of discovering a user-experience problem late in the life cycle, when it may be too difficult or too costly to fix. This principle applies to accessibility too. Early evaluation of digital products with disabled users means we can identify and resolve accessibility issues while there is still time to address them.
Evaluation in the UX design life cycle
The method that we use to evaluate a design’s UX depends on the nature of the design and what we want to learn from the evaluation. For user experience, the evaluation’s focus is—obviously—the user.
The simplified UX design process involves two overlapping phases. In the discovery phase, we conduct research to define as accurately as possible the problem that we want to solve. In the design phase, we aim to create the best possible solution.
Evaluation during the UX design process can happen in either phase—at any time that we have an idea that we want to test. An evaluation might be something extremely abstract and conceptual, like discovering the most effective way to name and organize content before designing the architecture of a website. Or it might be something more concrete, like testing the usability of a functional prototype for a checkout process.
What do we evaluate?
At the start of the design phase, the things that we create are simple and rough so that we can prototype initial ideas. For example:
- User flows that demonstrate how a user moves through a series of components and screens to complete tasks
- Wireframes that illustrate how components of a screen might be laid out and how layout changes at different screen sizes
- Sitemaps that list all screens/pages in a site or an app
These designs might be sketches on paper or straightforward annotated digital diagrams with limited or no interactivity. This means that an evaluation might be limited to visually inspecting a design or responding to an idea’s description.
As we evaluate and modify our designs in response, we become more confident in the solutions we generate. Evaluation lets us create things that are more functional, more visually polished, and more reflective of how the final product will look and behave.
When designs become more functional, evaluations can focus more on observing how people behave when they try to use our designs.
How do we include accessibility in evaluation?
In the UX design process we can think about evaluating in two ways:
- Evaluating against principles, standards, and guidelines
- Evaluating with people
Depending on resources and timing, both approaches can give us valuable insights into areas where we need to improve. And both approaches can include a focus on accessibility and people with disabilities.
Let’s look at each in turn.
Evaluating against principles, standards, and guidelines
Over the years, evidence-based best practice in user-centered design has become codified in different ways that we can use to help inspect our designs. For example, with a functional prototype we could identify potential issues that users might experience by:
- Performing a heuristic evaluation using usability principles such as Nielsen’s usability heuristics
- Inspecting the prototype using the Inclusive Design Principles
- Conducting an accessibility review using the Web Content Accessibility Guidelines (WCAG)
If we’re evaluating a design asset that has limited functionality or purpose, we need to select only those principles, guidelines, or parts of a standard that apply. For example, if we were to evaluate a visual design’s specification for accessibility, we’d refer only to the parts of an accessibility standard that cover visual accessibility.
An evaluation against best practice can help you identify whether there may be issues and help you prioritize efforts for removing those potential barriers. Accessibility standards and guidelines can also be used to define automated and manual tests on code as part of the development process.
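As a small illustration of how an accessibility standard can drive an automated check, the sketch below implements WCAG 2.x’s contrast-ratio calculation in Python. The relative-luminance and contrast-ratio formulas come directly from the WCAG definitions; the function and variable names here are our own, and a real project would more likely rely on an established testing library than hand-rolled code like this.

```python
# A minimal sketch of an automated WCAG contrast check.
# Formulas follow the WCAG 2.x definitions of relative luminance
# and contrast ratio; helper names are illustrative, not from any library.

def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a '#rrggbb' color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colors (always >= 1)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white reaches the maximum possible ratio of 21:1.
# WCAG 2.x Level AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
print(contrast_ratio("#767676", "#ffffff") >= 4.5)     # True
```

A check like this could run against a design system’s color tokens in a build pipeline, flagging any text/background pairing that falls below the threshold before it ever reaches a prototype.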
But to reiterate—principles, standards, and guidelines are necessarily generic and will only get us so far in user experience design. To really appreciate how effective our design is for its users and usage context, we need to involve people in evaluations.
Evaluating with people
Ultimately, as we move through the product-development life cycle, we need to gather direct feedback from users to optimize the user experience. And we need to do so strategically.
We can do so using methods like surveys and interviews. But rather than asking selected people to give us an opinion on a design, we get the best insights from observing people using it. Watching people’s behavior and asking questions about it yields much more reliable and useful data than an opinion based on looking at a design, or a prediction of how effective it might be in an imagined future. And we get the most reliable data from representative users rather than members of the design team.
Probably the most recognizable method for UX evaluation is usability testing. This involves asking participants to do one or more tasks using a digital product, observing what they do, and asking questions about what they did. It has particular value in gathering insights into their behavior, and in identifying the location and impact of barriers to effective use.
We can conduct usability testing in many ways—remote or in person, informal or formal. This flexibility makes it an extremely valuable part of the UX design and evaluation process, even where time and resources are limited. And when usability testing involves people with disabilities, we get additional insights into the impact of an accessibility barrier on a task by observing how users respond when they encounter it. We might also realize that the same barrier will affect other users in other situations—for example, people using a mobile device or accessing the product over a low-bandwidth connection.
In the next post in this series, we’ll look in more detail at methods for conducting usability testing with people with disabilities, including adapting methods for remote usability testing to be inclusive.
This article is one of a series of introductory articles explaining the importance of user experience (UX) to digital accessibility strategy and practice. Read all posts in the series:
- UX Series 1: Universal Design and Digital Accessibility
- UX Series 2: User Experience and Digital Accessibility
- UX Series 3: Digital Accessibility and the UX Design Process
- UX Series 5: Usability Testing and Digital Accessibility
- UX Series 6: Connecting UX with Digital Accessibility Strategy
For more in-depth information, read our Inclusion Blog’s UX articles. To learn more about how we can help you integrate UX best practices into your digital accessibility strategy, view our UX services or contact us.
David Sloan is User Experience Research Lead with The Paciello Group. He joined TPG in May 2013, after nearly 14 years researching, teaching and providing consultancy on accessibility and inclusive design at the University of Dundee in Scotland. He is an active participant in a number of W3C accessibility-focused groups, and is an Advisory Committee member of the annual W4A Cross-Disciplinary Conference on Web Accessibility.