Usability Test Specification
The principal techniques for usability testing are:
- Inspecting, evaluating or reviewing
- Dynamically interacting with prototypes
- Verifying and validating the actual implementation
- Conducting surveys and questionnaires
Inspecting, evaluating or reviewing
Inspection or review of the requirements specification and designs from a usability perspective, particularly when it increases the level of user involvement, can be cost-effective because problems are found early. Heuristic evaluation (systematic inspection of a user interface design for usability) can be used to find usability problems in the design so that they can be attended to as part of an iterative design process. This involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”). Reviews are more effective when the user interface is more visible; for example, sample screen shots are usually easier to understand and interpret than a narrative description of the functionality provided by a particular screen. Visualization is therefore important for an adequate usability review of the documentation.
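Findings from a heuristic evaluation are typically recorded per heuristic with a severity rating so that the most serious problems are attended to first in the iterative design process. The sketch below is one illustrative way to structure such findings; the heuristic labels, severity scale, and sample data are assumptions for this sketch, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str       # which usability principle the interface violates
    location: str        # screen or dialog where the problem was observed
    description: str
    severity: int        # assumed scale: 0 = not a problem ... 4 = catastrophe

def prioritized(findings):
    """Order findings so the most severe are attended to first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)

# Assumed sample findings for illustration.
findings = [
    Finding("Error prevention", "Checkout form",
            "Date field accepts past dates for delivery", severity=3),
    Finding("Visibility of system status", "Upload dialog",
            "No progress indication for long uploads", severity=2),
]
for f in prioritized(findings):
    print(f"[{f.severity}] {f.heuristic} ({f.location}): {f.description}")
```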
Dynamically interacting with prototypes
When prototypes are developed, the Test Analyst should work with them and help the developers evolve them by incorporating user feedback into the design. In this way, prototypes can be refined and users can get a more realistic view of how the finished product will look and feel.
Verifying and validating the actual implementation
Where the requirements specify usability characteristics for the software (e.g., the number of mouse clicks needed to accomplish a specific goal), test cases should be created to verify that the software implementation exhibits these characteristics.
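For example, a quantified requirement such as a click budget can be verified with an automated UI test. The following minimal sketch uses Playwright; the URL, selectors, and the three-click requirement are assumptions made up for illustration, to be replaced with those of the application under test.

```python
from playwright.sync_api import sync_playwright

# Assumed requirement: "reordering a previous purchase takes at most 3 clicks".
MAX_CLICKS = 3

def click(page, selector, counter):
    page.click(selector)
    counter.append(selector)   # record every click against the budget

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://shop.example.com/account")  # hypothetical URL
    clicks = []
    click(page, "text=Order history", clicks)
    click(page, "text=Reorder", clicks)
    click(page, "text=Confirm", clicks)
    assert len(clicks) <= MAX_CLICKS, (
        f"Goal took {len(clicks)} clicks, requirement is {MAX_CLICKS}")
    browser.close()
```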
To validate the actual implementation, tests specified for functional system testing may be developed into usability test scenarios. These scenarios measure specific usability characteristics, such as learnability or operability, rather than functional outcomes.
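For instance, learnability might be quantified as the reduction in task completion time over repeated attempts at the same task. The sketch below shows one possible way to evaluate such a measurement against an agreed criterion; the trial data and the 30% improvement threshold are illustrative assumptions.

```python
# Completion times (seconds) logged for the same task over repeated trials
# by one participant — assumed sample data.
trial_times = [184.0, 122.5, 96.0, 88.5]

# Assumed acceptance criterion: by the final trial the task should take
# at least 30% less time than on the first attempt.
improvement = (trial_times[0] - trial_times[-1]) / trial_times[0]
assert improvement >= 0.30, f"Only {improvement:.0%} improvement observed"
print(f"Learnability criterion met: {improvement:.0%} faster by final trial")
```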
Test scenarios for usability may be developed to specifically test syntax and semantics. Syntax is the structure or grammar of the interface (e.g., what can be entered in an input field) whereas semantics describes the meaning and purpose (e.g., reasonable and meaningful system messages and output provided to the user) of the interface.
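The distinction can be made concrete in test code. In the sketch below, `validate_delivery_date` is a stand-in for a hypothetical validator in the application under test: the syntax tests exercise what the input field accepts, while the semantics test asserts that the resulting message is meaningful to the user.

```python
import pytest

def validate_delivery_date(raw: str):
    """Stand-in for the application's real validator (assumed behavior):
    returns (accepted, message) for a raw input string."""
    from datetime import date
    try:
        parsed = date.fromisoformat(raw)
    except ValueError:
        return False, "Please enter the delivery date as YYYY-MM-DD."
    return True, f"Delivery date set to {parsed.isoformat()}."

# Syntax: malformed input must be rejected by the field.
@pytest.mark.parametrize("raw", ["2024-13-01", "not a date", ""])
def test_syntax_rejects_malformed_input(raw):
    accepted, _ = validate_delivery_date(raw)
    assert not accepted

# Semantics: the message must name the field and how to correct the
# input, rather than e.g. "Error 4711".
def test_semantics_message_is_meaningful():
    _, message = validate_delivery_date("2024-13-01")
    assert "delivery date" in message.lower()
    assert "yyyy-mm-dd" in message.lower()
```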
Black box techniques, particularly use cases, which can be defined in plain text or with UML (Unified Modeling Language), are sometimes employed in usability testing.
Test scenarios for usability testing also need to include user instructions, an allocation of time for pre- and post-test interviews (for giving instructions and receiving feedback), and an agreed protocol for conducting the sessions. This protocol includes a description of how the test will be carried out, timings, note taking and session logging, and the interview and survey methods to be used.
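One way to keep these elements together is to capture the agreed protocol as structured data, so that every session is conducted and logged in the same way. The field names and timings below are assumptions for this sketch, not a standard template.

```python
from dataclasses import dataclass, field

@dataclass
class SessionProtocol:
    # Illustrative structure for an agreed usability session protocol.
    user_instructions: str
    pre_test_interview_minutes: int
    task_time_minutes: int
    post_test_interview_minutes: int
    note_taking: str          # e.g., "one moderator, one note taker"
    session_logging: str      # e.g., "screen recording plus audio"
    survey_method: str        # e.g., "post-session questionnaire"
    tasks: list[str] = field(default_factory=list)

protocol = SessionProtocol(
    user_instructions="Think aloud while completing each task.",
    pre_test_interview_minutes=10,
    task_time_minutes=40,
    post_test_interview_minutes=15,
    note_taking="one moderator, one note taker",
    session_logging="screen and audio recording",
    survey_method="questionnaire handed out after the session",
    tasks=["Find and reorder a previous purchase"],
)
```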
Conducting surveys and questionnaires
Survey and questionnaire techniques may be applied to gather observations and feedback regarding user behavior with the system. Standardized and publicly available surveys such as SUMI (Software Usability Measurement Inventory) and WAMMI (Website Analysis and MeasureMent Inventory) permit benchmarking against a database of previous usability measurements. In addition, because SUMI provides concrete measurements of usability, its scores can serve as a set of completion/acceptance criteria.
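As an illustration of using survey scores as acceptance criteria, the sketch below aggregates questionnaire responses and compares the result to a benchmark value. The scoring here is deliberately generic; it is not the actual SUMI or WAMMI scoring procedure, and the response data and threshold are assumptions.

```python
from statistics import mean

# Assumed raw responses on a 1-5 scale, one list per participant.
responses = [
    [4, 5, 3, 4, 4],
    [3, 4, 4, 5, 3],
    [5, 4, 4, 4, 5],
]

# Generic illustration only: rescale the mean response to 0-100 and compare
# against an assumed benchmark. Real SUMI/WAMMI scoring uses the instruments'
# own published procedures and reference databases.
BENCHMARK = 65.0  # assumed acceptance criterion

score = (mean(mean(r) for r in responses) - 1) / 4 * 100
print(f"Usability score: {score:.1f} (benchmark {BENCHMARK})")
assert score >= BENCHMARK, "Acceptance criterion not met"
```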
It is important to consider the accessibility of software for those with particular needs or restrictions on its use, including those with disabilities. Accessibility testing should consider the relevant standards, such as the Web Content Accessibility Guidelines (WCAG), and legislation, such as the Disability Discrimination Acts (UK, Australia) and Section 508 (US). Accessibility, like usability, must be considered during the design phases. Testing often occurs during the integration test levels and continues through system testing and into the acceptance test levels. Defects are usually identified when the software fails to meet the regulations or standards designated for it.
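A small subset of such checks can be automated early. The sketch below, using only Python's standard library, flags images that lack alternative text, which relates to one of the WCAG success criteria; it is a minimal illustration with assumed sample markup, not a substitute for a full accessibility audit tool or manual testing.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags with no alt attribute (related to WCAG 1.1.1)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.violations.append(attributes.get("src", "<unknown>"))

# Assumed sample markup for illustration.
html = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(html)
for src in checker.violations:
    print(f"Image missing alt text: {src}")
# In an automated test, this would be asserted: assert not checker.violations
```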