Testing Software Quality Characteristics

Introduction

This article considers the application of test techniques to evaluating the principal characteristics used to describe the quality of software applications or systems.

The description of product quality characteristics provided in ISO 9126 is used as a guide. Other standards, such as the ISO 25000 series (which has superseded ISO 9126), may also be of use. The ISO quality characteristics are divided into product quality characteristics (attributes), each of which may have sub-characteristics (sub-attributes).

The Test Analyst should concentrate on the software quality characteristics of functionality and usability. Accessibility testing should also be conducted by the Test Analyst. Although it is not listed as a sub-characteristic, accessibility is often considered to be part of usability testing. Testing for the other quality characteristics is usually considered to be the responsibility of the Technical Test Analyst.

The sub-characteristic of compliance is shown for each of the quality characteristics. In certain safety-critical or regulated environments, each quality characteristic may have to comply with specific standards and regulations (e.g., functionality compliance may indicate that the functionality complies with a specific standard, such as using a particular communication protocol in order to send/receive data from a chip). Because those standards can vary widely depending on the industry, they will not be discussed in depth here. If the Test Analyst is working in an environment that is affected by compliance requirements, it is important to understand those requirements and to ensure that both the testing and the test documentation will fulfill them.

For all of the quality characteristics and sub-characteristics discussed in this section, the typical risks must be recognized so that an appropriate testing strategy can be formed and documented. Quality characteristic testing requires particular attention to life cycle timing, required tools, software and documentation availability, and technical expertise. Without a strategy to deal with each characteristic and its unique testing needs, the tester may not have adequate planning, ramp-up and test execution time built into the schedule. Some of this testing, e.g., usability testing, can require allocation of special human resources, extensive planning, dedicated labs, specific tools, specialized testing skills and, in most cases, a significant amount of time. In some cases, usability testing may be performed by a separate group of usability, or user experience, experts.

Quality characteristic and sub-characteristic testing must be integrated into the overall testing schedule, with adequate resources allocated to the effort. Each of these areas has specific needs, targets specific issues and may occur at different times during the software development life cycle, as discussed in the sections below.

While the Test Analyst may not be responsible for the quality characteristics that require a more technical approach, it is important that the Test Analyst be aware of the other characteristics and understand the overlap areas for testing. For example, a product that fails performance testing will also likely fail in usability testing if it is too slow for the user to use effectively. Similarly, a product with interoperability issues with some components is probably not ready for portability testing, as changing the environment will tend to obscure the more basic problems.

Quality Characteristics for Business Domain Testing

Functional testing is a primary focus for the Test Analyst. Functional testing is focused on “what” the product does. The test basis for functional testing is generally a requirements or specification document, specific domain expertise or implied need. Functional tests vary according to the test level in which they are conducted and can also be influenced by the software development life cycle. For example, a functional test conducted during integration testing will test the functionality of interfacing modules which implement a single defined function. At the system test level, functional tests include testing the functionality of the application as a whole. For systems of systems, functional testing will focus primarily on end-to-end testing across the integrated systems. In an Agile environment, functional testing is usually limited to the functionality made available in the particular iteration or sprint, although regression testing for an iteration may cover all released functionality.

A wide variety of test techniques are employed during functional testing. Functional testing may be performed by a dedicated tester, a domain expert, or a developer (usually at the component level).

In addition to the functional testing covered in this section, there are also two quality characteristics that are a part of the Test Analyst’s area of responsibility that are considered to be non-functional (focused on “how” the product delivers the functionality) testing areas. These two non-functional attributes are usability and accessibility.

The following quality characteristics are considered in this section:

  • Functional quality sub-characteristics
    • Accuracy
    • Suitability
    • Interoperability
  • Non-functional quality characteristics
    • Usability
    • Accessibility

Accuracy Testing

Functional accuracy involves testing the application’s adherence to the specified or implied requirements and may also include computational accuracy. Accuracy testing employs many of the standard test techniques and often uses the specification or a legacy system as the test oracle. Accuracy testing can be conducted at any stage in the life cycle and is targeted at the incorrect handling of data or situations.
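
As a hedged illustration of using an oracle for computational accuracy, the sketch below compares the output of the system under test with an expected value derived from the oracle. The function names (legacy_interest, new_interest), the formula and the tolerance are all hypothetical stand-ins for whatever specification or legacy system serves as the oracle on a real project.

```python
# Minimal sketch of accuracy testing against a test oracle.
# legacy_interest() stands in for the oracle (a legacy system, or a formula taken
# directly from the specification); new_interest() stands in for the system under test.

def legacy_interest(principal: float, rate: float, years: int) -> float:
    """Oracle: simple interest as defined in the (assumed) specification."""
    return principal * rate * years

def new_interest(principal: float, rate: float, years: int) -> float:
    """Stand-in for the implementation under test."""
    return principal * rate * years  # replace with a call into the real system

def test_computational_accuracy():
    cases = [(1000.0, 0.05, 1), (2500.0, 0.035, 3), (0.0, 0.05, 10)]
    for principal, rate, years in cases:
        expected = legacy_interest(principal, rate, years)
        actual = new_interest(principal, rate, years)
        # Allow a small tolerance so rounding differences are not reported as defects.
        assert abs(actual - expected) <= 0.01, (principal, rate, years, actual, expected)

test_computational_accuracy()
```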

Suitability Testing

Suitability testing involves evaluating and validating the appropriateness of a set of functions for its intended specified tasks. This testing can be based on use cases. Suitability testing is usually conducted during system testing, but may also be conducted during the later stages of integration testing. Defects discovered in this testing are indications that the system will not be able to meet the needs of the user in a way that will be considered acceptable.

Interoperability Testing

Interoperability testing evaluates the degree to which two or more systems or components can exchange information and subsequently use the information that has been exchanged. Testing must cover all the intended target environments (including variations in the hardware, software, middleware, operating system, etc.) to ensure the data exchange will work properly. In reality, this may only be feasible for a relatively small number of environments. In that case, interoperability testing may be limited to a selected representative group of environments. Specifying tests for interoperability requires that combinations of the intended target environments are identified, configured and available to the test team. These environments are then tested using a selection of functional test cases which exercise the various data exchange points present in the environment.
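
The sketch below shows one simple way of enumerating environment combinations and thinning them out to a representative group. The environment parameters are invented; on a real project they would come from the documented target environments, and a pairwise (all-pairs) selection tool would normally replace the naive sampling shown here.

```python
from itertools import product

# Hypothetical environment parameters; a real list would come from the
# documented target environments for the system.
operating_systems = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox", "Edge"]
middleware = ["MQ broker A", "REST gateway B"]

all_combinations = list(product(operating_systems, browsers, middleware))

# Naive thinning: take every second combination as a representative group.
# A pairwise (all-pairs) selection would normally be preferred.
representative = all_combinations[::2]

print(f"{len(representative)} of {len(all_combinations)} environments selected:")
for os_name, browser, mw in representative:
    print(f"  {os_name} / {browser} / {mw}")
```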

Interoperability relates to how different software systems interact with each other. Software with good interoperability characteristics can be integrated with a number of other systems without requiring major changes. The number of changes and the effort required to perform those changes may be used as a measure of interoperability.

Testing for software interoperability may, for example, focus on the following design features:

  • Use of industry-wide communications standards, such as XML
  • Ability to automatically detect the communications needs of the systems it interacts with and adjust accordingly

Interoperability testing may be particularly significant for organizations developing Commercial Off The Shelf (COTS) software and tools and organizations developing systems of systems.

This type of testing is performed during component integration and system testing focusing on the interaction of the system with its environment. At the system integration level, this type of testing is conducted to determine how well the fully developed system interacts with other systems. Because systems may interoperate on multiple levels, the Test Analyst must understand these interactions and be able to create the conditions that will exercise the various interactions. For example, if two systems will exchange data, the Test Analyst must be able to create the necessary data and the transactions required to perform the data exchange. It is important to remember that not all interactions may be clearly specified in the requirements documents. Instead, many of these interactions will be defined only in the system architecture and design documents. The Test Analyst must be able and prepared to examine those documents to determine the points of information exchange between systems and between the system and its environment to ensure all are tested. Techniques such as decision tables, state transition diagrams, use cases and combinatorial testing are all applicable to interoperability testing. Typical defects found include incorrect data exchange between interacting components.
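
The sketch below illustrates one such data exchange point: an XML record produced in the format assumed for one system is parsed as the receiving system would parse it, and the test checks that the exchanged fields survive intact and remain usable. The element names and helper functions are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical payload in the format assumed for system A.
def build_order_from_system_a(order_id: str, amount: str) -> str:
    return f"<order><id>{order_id}</id><amount>{amount}</amount></order>"

# Stand-in for system B's import step: parse the payload and use the data.
def import_order_into_system_b(payload: str) -> dict:
    root = ET.fromstring(payload)
    return {"id": root.findtext("id"), "amount": root.findtext("amount")}

def test_order_exchange():
    payload = build_order_from_system_a("A-1001", "250.00")
    imported = import_order_into_system_b(payload)
    # The exchanged information must not only arrive, it must be usable unchanged.
    assert imported == {"id": "A-1001", "amount": "250.00"}

test_order_exchange()
```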

Usability Testing

It is important to understand why users might have difficulty using the system. To gain this understanding it is first necessary to appreciate that the term “user” may apply to a wide range of different types of persons, ranging from IT experts to children to people with disabilities.

Some national institutions (e.g., the British Royal National Institute for the Blind), recommend that web pages be accessible for disabled, blind, partially sighted, mobility impaired, deaf and cognitively-disabled users. Checking that applications and web sites are usable for the above users may also improve the usability for everyone else. Accessibility is discussed more below.

Usability testing evaluates the ease with which users can use or learn to use the system to reach a specified goal in a specific context. Usability testing is directed at measuring the following (an illustrative scoring sketch follows the list):

  • Effectiveness – capability of the software product to enable users to achieve specified goals with accuracy and completeness in a specified context of use
  • Efficiency – capability of the product to enable users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use
  • Satisfaction – capability of the software product to satisfy users in a specified context of use
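
One simple way to turn these three measures into numbers is shown below. The formulas (task completion rate, completion rate per average minute spent, mean satisfaction rating) are common conventions rather than values mandated by any standard, and the session data is invented.

```python
# Illustrative usability measures from observed test sessions.
# Each session: (task_completed, minutes_spent, satisfaction_rating_1_to_5)
sessions = [(True, 4.0, 4), (True, 6.5, 3), (False, 8.0, 2), (True, 5.0, 5)]

completed = sum(1 for done, _, _ in sessions if done)
effectiveness = completed / len(sessions)                     # share of goals achieved
average_minutes = sum(minutes for _, minutes, _ in sessions) / len(sessions)
efficiency = effectiveness / average_minutes                  # effectiveness per average minute spent
satisfaction = sum(rating for _, _, rating in sessions) / len(sessions)

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency:    {efficiency:.3f} per minute")
print(f"Satisfaction:  {satisfaction:.1f} / 5")
```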

Attributes that may be measured include:

  • Understandability – attributes of the software that affect the effort required by the user to recognize the logical concept and its applicability
  • Learnability – attributes of software that affect the effort required by the user to learn the application
  • Operability – attributes of the software that affect the effort required by the user to conduct tasks effectively and efficiently
  • Attractiveness – the capability of the software to be liked by the user

Usability testing is usually conducted in two steps:

  • Formative Usability Testing – testing that is conducted iteratively during the design and prototyping stages to help guide (or “form”) the design by identifying usability design defects
  • Summative Usability Testing – testing that is conducted after implementation to measure the usability and identify problems with a completed component or system

Usability tester skills should include expertise or knowledge in the following areas:

  • Sociology
  • Psychology
  • Conformance to national standards (including accessibility standards)
  • Ergonomics

Conducting Usability Tests

Validation of the actual implementation should be done under conditions as close as possible to those under which the system will be used. This may involve setting up a usability lab with video cameras, mock-up offices, review panels, users, etc., so that development staff can observe the effect of the actual system on real people. Formal usability testing often requires some amount of preparing the “users” (these could be real users or user representatives), either by providing set scripts or instructions for them to follow. Other, free-form tests allow the user to experiment with the software so the observers can determine how easy or difficult it is for the user to figure out how to accomplish their tasks.

Many usability tests may be executed by the Test Analyst as part of other tests, for example during functional system test. To achieve a consistent approach to the detection and reporting of usability defects in all stages of the life cycle, usability guidelines may be helpful. Without usability guidelines, it may be difficult to determine what is “unacceptable” usability. For example, is it unreasonable for a user to have to make 10 mouse clicks to log into an application? Without specific guidelines, the Test Analyst can be in the difficult position of defending defect reports that the developer wants to close because the software works “as designed”. It is very important to have verifiable usability specifications defined in the requirements as well as a set of usability guidelines that are applied to all similar projects. The guidelines should include such items as accessibility of instructions, clarity of prompts, number of clicks to complete an activity, error messaging, processing indicators (some type of indicator for the user that the system is processing and cannot accept further inputs at the time), screen layout guidelines, use of colors and sounds and other factors that affect the user’s experience.
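
Guidelines of this kind can be kept in a simple, machine-readable form so that observations from test sessions are judged consistently across projects. In the sketch below, every guideline name and threshold is purely illustrative, not a value taken from any standard.

```python
# Illustrative usability guidelines; every threshold here is an assumption.
guidelines = {
    "max_clicks_to_log_in": 4,
    "max_clicks_to_complete_purchase": 8,
    "processing_indicator_required_after_seconds": 2,
}

# Observed values recorded during a test session (also illustrative).
observed = {
    "max_clicks_to_log_in": 10,
    "max_clicks_to_complete_purchase": 7,
}

# Flag any observation that exceeds its guideline as a potential usability defect.
for name, limit in guidelines.items():
    if name in observed and observed[name] > limit:
        print(f"Potential usability defect: {name} observed {observed[name]}, guideline {limit}")
```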

Usability Test Specification

Principal techniques for usability testing are:

  • Inspecting, evaluating or reviewing
  • Dynamically interacting with prototypes
  • Verifying and validating the actual implementation
  • Conducting surveys and questionnaires

Inspecting, evaluating or reviewing

Inspection or review of the requirements specification and designs from a usability perspective, which increases the user’s level of involvement, can be cost-effective by finding problems early. Heuristic evaluation (systematic inspection of a user interface design for usability) can be used to find usability problems in the design so that they can be attended to as part of an iterative design process. This involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”). Reviews are more effective when the user interface is more visible. For example, sample screen shots are usually easier to understand and interpret than a narrative description of the functionality provided by a particular screen. Visualization is important for an adequate usability review of the documentation.
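
A heuristic evaluation usually ends with the evaluators’ findings being consolidated and ranked. The sketch below aggregates severity ratings per heuristic so the most problematic areas stand out; the heuristic names follow commonly cited usability principles, and the ratings are invented.

```python
from collections import defaultdict

# Findings from several evaluators: (heuristic, severity 1 = cosmetic .. 4 = blocking).
findings = [
    ("Visibility of system status", 3),
    ("Consistency and standards", 2),
    ("Visibility of system status", 4),
    ("Error prevention", 3),
    ("Consistency and standards", 1),
]

totals = defaultdict(list)
for heuristic, severity in findings:
    totals[heuristic].append(severity)

# Rank heuristics by average severity so the design discussion starts with
# the areas the evaluators judged most problematic.
for heuristic, severities in sorted(totals.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    avg = sum(severities) / len(severities)
    print(f"{heuristic}: {len(severities)} finding(s), average severity {avg:.1f}")
```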

Dynamically interacting with prototypes

When prototypes are developed, the Test Analyst should work with the prototypes and help the developers evolve the prototype by incorporating user feedback into the design. In this way, prototypes can be refined and the user can get a more realistic view of how the finished product will look and feel.

Verifying and validating the actual implementation

Where the requirements specify usability characteristics for the software (e.g., the number of mouse clicks to accomplish a specific goal), test cases should be created to verify that the software implementation has included these characteristics.
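
If a requirement states, for example, that logging in must take no more than a given number of mouse clicks, that characteristic can be verified directly. The Selenium-based sketch below is one possible approach, assuming a locally available ChromeDriver; the URL, element IDs, the confirmation step and the limit of three clicks are all hypothetical, and the test simply counts the clicks the scripted path needs.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

MAX_CLICKS_TO_LOG_IN = 3  # assumed requirement value

def test_login_click_count():
    driver = webdriver.Chrome()  # any WebDriver would do
    clicks = 0
    try:
        driver.get("https://example.test/login")                      # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("analyst")   # hypothetical element IDs
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        clicks += 1
        # Assume one confirmation step exists in this (hypothetical) login flow.
        driver.find_element(By.ID, "confirm-profile").click()
        clicks += 1
        assert clicks <= MAX_CLICKS_TO_LOG_IN, f"Login needed {clicks} clicks"
    finally:
        driver.quit()
```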

For performing validation of the actual implementation, tests specified for functional system test may be developed as usability test scenarios. These test scenarios measure specific usability characteristics, such as learnability or operability, rather than functional outcomes.

Test scenarios for usability may be developed to specifically test syntax and semantics. Syntax is the structure or grammar of the interface (e.g., what can be entered in an input field) whereas semantics describes the meaning and purpose (e.g., reasonable and meaningful system messages and output provided to the user) of the interface.
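
The distinction can be made concrete with two small checks: a syntax-oriented check that feeds an invalid value into an input field, and a semantics-oriented check that judges whether the resulting message is meaningful to the user. The validation function and message text below are hypothetical.

```python
# Hypothetical field validation for a quantity input (syntax: what may be entered).
def validate_quantity(raw: str):
    if not raw.isdigit() or not (1 <= int(raw) <= 99):
        return "Quantity must be a whole number between 1 and 99."
    return None

# Syntax-oriented check: the field must reject a non-numeric entry.
assert validate_quantity("abc") is not None

# Semantics-oriented check: the message must tell the user what to do,
# not merely expose an internal error code.
message = validate_quantity("abc")
assert "between 1 and 99" in message and "Error 422" not in message
```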

Black box techniques, particularly use cases which can be defined in plain text or with UML (Unified Modeling Language), are sometimes employed in usability testing.

Test scenarios for usability testing also need to include user instructions, allocation of time for pre- and post-test interviews for giving instructions and receiving feedback, and an agreed protocol for conducting the sessions. This protocol includes a description of how the test will be carried out, timings, note taking and session logging, and the interview and survey methods to be used.

Conducting surveys and questionnaires

Survey and questionnaire techniques may be applied to gather observations and feedback regarding user behavior with the system. Standardized and publicly available surveys such as SUMI (Software Usability Measurement Inventory) and WAMMI (Website Analysis and MeasureMent Inventory) permit benchmarking against a database of previous usability measurements. In addition, since SUMI provides concrete measurements of usability, this can provide a set of completion / acceptance criteria.
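
The sketch below shows only the general idea of turning questionnaire responses into a score that can serve as a completion/acceptance criterion; it does not reproduce the actual SUMI or WAMMI scoring algorithms, and both the responses and the threshold are assumed values.

```python
# Likert-scale responses (1 = strongly disagree .. 5 = strongly agree) per respondent.
# Values are invented; the scoring below is generic, not the SUMI/WAMMI algorithm.
responses = [
    [4, 5, 3, 4, 4],
    [3, 4, 4, 5, 3],
    [2, 3, 3, 4, 3],
]

ACCEPTANCE_THRESHOLD = 70  # assumed acceptance criterion on a 0-100 scale

def score(answers):
    # Map the 1..5 average onto a 0..100 scale.
    return (sum(answers) / len(answers) - 1) / 4 * 100

overall = sum(score(r) for r in responses) / len(responses)
print(f"Overall usability score: {overall:.1f}")
print("Acceptance criterion met" if overall >= ACCEPTANCE_THRESHOLD else "Below acceptance criterion")
```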

Accessibility Testing

It is important to consider the accessibility of software for those with particular needs or restrictions on its use. This includes those with disabilities. Accessibility testing should consider the relevant standards, such as the Web Content Accessibility Guidelines, and legislation, such as the Disability Discrimination Acts (UK, Australia) and Section 508 (US). Accessibility, similar to usability, must be considered during the design phases. Testing often occurs during the integration levels and continues through system testing and into the acceptance testing levels. Defects are usually detected when the software fails to meet the designated regulations or standards defined for the software.
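
As a small illustration of an automatable accessibility check, the sketch below scans an HTML fragment for images that have no text alternative, one of the points covered by the Web Content Accessibility Guidelines. The HTML sample is invented, and a real audit would use dedicated accessibility testing tools and cover far more criteria.

```python
from html.parser import HTMLParser

# Invented HTML fragment; a real check would run against the rendered pages.
SAMPLE_HTML = """
<main>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
  <img src="divider.png" alt="">
</main>
"""

class MissingAltChecker(HTMLParser):
    """Collects <img> elements that have no alt attribute at all.
    (An empty alt="" is left alone, since it can be valid for decorative images.)"""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown>"))

checker = MissingAltChecker()
checker.feed(SAMPLE_HTML)
for src in checker.missing:
    print(f"Image without text alternative: {src}")
```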