Step 6: Testing & Validation
We all know that requesting an accessible technology product and actually getting one are two different things. All too often, customers take vendors at their word when they say a product is accessible, and then accept it without testing the product and validating the claim. Unfortunately, those customers then often find themselves stuck with a product that’s not fully accessible. Unless they’ve built remediation requirements into their contracts, they have no recourse for fixing the accessibility barriers—and the vendor is off the hook.
The key to avoiding such scenarios is to make acceptance testing a core part of your procurement process. These formal tests—conducted by the customer upon delivery of the product—determine whether the ICT product satisfies the acceptance criteria in your contract. Most mature procurement operations do some form of acceptance testing already, evaluating the product against stated requirements. But not all acceptance tests factor in accessibility.
The guidance in Step 6 is here to help you change that. It offers background on accessibility testing best practices for you to consider incorporating into the evaluation and validation phase of your procurement process.
So let’s talk testing.
A good accessibility testing process entails the testing itself, as well as accurate and comprehensive reporting on the results. Best practices include the following:
Embrace Comprehensive Testing
Numerous automated testing tools in the marketplace can run a cursory accessibility check on your digital products. While they can be a good start, there is general consensus that manual accessibility testing is an essential part of the process.
Tip: Watch the archived webinar “The Importance of User Testing for Accessibility.”
For example, an automated test can tell you whether all images have alternative text, but only a manual test can indicate whether an image actually requires a text alternative (it might just be used for formatting a screen, for example) or whether the alternative text makes sense within the context of the image. Given that many work-related technologies are designed to enable users to accomplish a goal and not just read text or watch videos, a purely guideline-driven testing approach will often be weaker than one based on a list of tasks users actually have to perform.
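To illustrate that gap, a minimal automated check can flag images that lack an alt attribute entirely, but it cannot judge whether existing alternative text is meaningful or whether an empty alt is correct for a decorative image. This hypothetical sketch uses only the Python standard library and is not tied to any particular testing tool:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags with no alt attribute at all.

    Note the limits of automation: this cannot tell whether existing
    alt text makes sense in context, or whether alt="" is appropriate
    for a purely decorative image. A human reviewer must judge that.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = []  # src values of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "(no src)"))

html = """
<p>Quarterly report</p>
<img src="chart.png">            <!-- flagged: no alt at all -->
<img src="spacer.gif" alt="">    <!-- passes: empty alt marks it decorative -->
<img src="logo.png" alt="ACME">  <!-- passes: has alt text -->
"""

checker = MissingAltChecker()
checker.feed(html)
print(checker.missing_alt)  # ['chart.png']
```

The automated pass catches only the mechanically detectable failure; a skilled analyst still has to confirm whether `alt=""` and `alt="ACME"` are the right choices for those images.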
Procedurally, it makes sense to perform all automated tests and then have a skilled analyst go through the results to confirm the errors and perform the necessary manual tests. It is important to note that exact testing protocols are still more art than science and sometimes viewed as proprietary. However, as the domain of accessible information and communication technology (ICT) matures, there is more consensus and sharing.
Tip: Check out the automated tools available for testing software and web content (note that the same cautions mentioned above apply).
Incorporate Scenario-Based Testing
You can also perform effective accessibility testing by creating persona-based test cases using a scenario model. In this approach, you gather relevant information about who your primary users are and add differing abilities as a dimension of each persona.
For instance, let’s envision you are developing an employee travel system. Searching for flights, booking hotel rooms, and comparing car rental prices would all be scenarios to include in the testing phase. Taking that idea to the next level, creating personas allows your coverage to include overlooked populations. For example, one persona could be someone who uses a screen-reading application, either built into certain devices or installed separately. Then, you would run all the scenarios listed above using available screen-reading applications. Other possible personas could include people who use hearing aids, persons with limited dexterity, and others who may require large or high-contrast fonts and displays.
The more your testing can mimic potential users, including users with disabilities, the better your final product will be. Thorough testing early on could reduce time-to-market for subsequent updates or new releases as you establish a mechanism for testing, reporting, and addressing the findings that this type of testing methodology can reveal.
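Crossing personas with scenarios yields a simple test matrix that makes coverage explicit. This is a hypothetical sketch (the persona and scenario names merely echo the travel-system example above), not a prescribed format:

```python
from itertools import product

# Hypothetical personas and scenarios from the travel-system example.
personas = [
    "screen reader user",
    "hearing aid user",
    "limited dexterity (keyboard-only)",
    "large / high-contrast display user",
]
scenarios = [
    "search for flights",
    "book a hotel room",
    "compare car rental prices",
]

# Every persona attempts every scenario; each pairing is one test case
# to run, report on, and track to resolution.
test_matrix = [
    {"persona": p, "scenario": s, "result": "not yet run"}
    for p, s in product(personas, scenarios)
]

print(len(test_matrix))  # 12 test cases (4 personas x 3 scenarios)
```

Enumerating the full cross product up front makes it harder for a persona to be silently skipped for one of the core tasks.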
Check Compatibility of Screen Readers and Other Assistive Technology
If possible, ensure interoperability by testing with all the assistive technology (AT) devices commonly used by the people who will use your product. Screen reader testing in particular should be comprehensive: use the same task list for users with and without disabilities, and thoroughly exercise the interface, navigating by several different methods to accomplish each task.
One thing to keep in mind is your testers’ level of expertise compared to your expected users. Not every AT user knows how to use the most advanced features of their AT products. So while it’s okay to have more advanced AT users test highly sophisticated or complex products, be sure to ask them to test the more basic functions as well.
Remember Documentation and Support
While the product’s own accessibility is the most important thing to test, everything the user encounters should be accessible. Be sure to test the user manuals and online help features provided by the vendor. If you have live support for the product, provide appropriate details to your help desk personnel so they can answer common questions and understand when to forward more complex situations.
Report Test Results and Remediate
Report your test results to the vendor, and make a plan for remediating the issues found in accordance with the terms of your contract. Beyond the obvious advantages for your product, this can be a useful exercise for the vendor, since the lessons learned can inform their current and future product development.
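A findings report is easier to act on when issues are tied to specific success criteria and grouped by severity. The sketch below is illustrative only: the field names, severity levels, and sample findings are invented for this example, though the WCAG criterion numbers cited are real:

```python
from collections import defaultdict

# Hypothetical findings from acceptance testing of the travel system.
# The report structure is an illustration, not a mandated format.
findings = [
    {"id": "A-1", "issue": "Flight results table not announced by screen reader",
     "wcag": "1.3.1", "severity": "high"},
    {"id": "A-2", "issue": "Hotel photos missing text alternatives",
     "wcag": "1.1.1", "severity": "medium"},
    {"id": "A-3", "issue": "Focus outline invisible in high-contrast mode",
     "wcag": "2.4.7", "severity": "medium"},
]

# Group by severity so the remediation plan can address the worst barriers first.
by_severity = defaultdict(list)
for f in findings:
    by_severity[f["severity"]].append(f["id"])

for level in ("high", "medium", "low"):
    print(level, by_severity.get(level, []))
```

Mapping each finding to a criterion and a severity gives the vendor an unambiguous remediation checklist and gives you a way to verify fixes against the contract.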
Tip: Check out this guidance on developing remediation plans:
- Electronic Information Resources Accessibility Implementation and Remediation Plan (Texas Department of Information Resources)