Performance Testing

The ISO 9126 classification of product quality characteristics includes performance (time behavior) as a sub-characteristic of efficiency. Performance testing focuses on the ability of a component or system to respond to user or system inputs within a specified time and under specified conditions.

Performance measurements vary according to the objectives of the test. For individual software components, performance may be measured according to CPU cycles, while for client-based systems performance may be measured according to the time taken to respond to a particular user request. For systems whose architectures consist of several components (e.g., clients, servers, databases) performance measurements are taken for transactions between individual components so that performance “bottlenecks” can be identified.
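The idea of timing individual transactions between components to locate a bottleneck can be sketched as follows. The component functions and their latencies are hypothetical stand-ins; in practice each would be a real client, server, or database call.

```python
import time

def timed(fn, *args):
    """Measure the wall-clock time of a single call; return (result, seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Hypothetical stand-ins for components in a client/server/database chain.
def query_database(n):
    time.sleep(0.02)           # simulated database latency
    return list(range(n))

def render_response(rows):
    time.sleep(0.005)          # simulated rendering cost
    return len(rows)

rows, db_time = timed(query_database, 100)
count, render_time = timed(render_response, rows)

timings = {"db": db_time, "render": render_time}
bottleneck = max(timings, key=timings.get)   # slowest link in the chain
print(f"bottleneck: {bottleneck}")
```

Taking measurements per component in this way, rather than only end to end, makes it possible to attribute slow responses to a specific link in the architecture.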

Types of Performance Testing

Load Testing

Load testing focuses on the ability of a system to handle increasing levels of anticipated realistic loads resulting from the transaction requests generated by numbers of concurrent users or processes. Average response times for users under different scenarios of typical use (operational profiles) can be measured and analyzed.
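A minimal sketch of this approach, assuming a thread pool as the load generator: each simulated user issues requests against a stand-in service, and the average response time is computed per load level. The request function is hypothetical and would be replaced by a real client call.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def user_request():
    """Stand-in for one user transaction; replace with a real client call."""
    start = time.perf_counter()
    time.sleep(0.01)                       # simulated service time
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    """Drive the service with a pool of simulated users; return mean response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(user_request)
                   for _ in range(concurrent_users * requests_per_user)]
        samples = [f.result() for f in futures]
    return statistics.mean(samples)

avg_10 = run_load(concurrent_users=10, requests_per_user=5)
avg_50 = run_load(concurrent_users=50, requests_per_user=5)
print(f"10 users: {avg_10:.4f}s, 50 users: {avg_50:.4f}s")
```

Running the same scenario at several realistic load levels lets average response times be compared against the requirements for each operational profile.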

Stress Testing

Stress testing focuses on the ability of a system or component to handle peak loads at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as accessible computer capacity and available bandwidth. Performance levels should degrade slowly and predictably without failure as stress levels are increased. In particular, the functional integrity of the system should be tested while the system is under stress in order to find possible faults in functional processing or data inconsistencies.

One possible objective of stress testing is to discover the limits at which a system actually fails so that the “weakest link in the chain” can be determined. Stress testing allows additional capacity to be added to the system in a timely manner (e.g., memory, CPU capability, database storage).
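Finding the load level at which the system actually fails can be sketched as a simple upward ramp. The service and its capacity limit below are hypothetical; a real stress test would drive the deployed system and detect failures or error responses.

```python
def service(load):
    """Hypothetical system under test: fails above its capacity limit."""
    CAPACITY = 400
    if load > CAPACITY:
        raise RuntimeError("overload")
    return "ok"

def find_breaking_point(start, step, ceiling):
    """Ramp the load upward until the system fails; return (last safe, failing) levels."""
    load, last_ok = start, None
    while load <= ceiling:
        try:
            service(load)
            last_ok = load
        except RuntimeError:
            return last_ok, load
        load += step
    return last_ok, None    # no failure observed up to the ceiling

safe, failed = find_breaking_point(start=100, step=100, ceiling=1000)
print(f"last safe load: {safe}, first failing load: {failed}")
```

The gap between the last safe level and the specified workload indicates how much headroom exists before extra capacity must be added.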

Scalability Testing

Scalability testing focuses on the ability of a system to meet future efficiency requirements, which may be beyond those currently required. The objective of the tests is to determine the system’s ability to grow (e.g., with more users, larger amounts of data stored) without exceeding the currently specified performance requirements or failing. Once the limits of scalability are known, threshold values can be set and monitored in production to provide a warning of impending problems. In addition, the production environment may be provisioned with appropriate amounts of hardware.
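Monitoring production metrics against thresholds derived from scalability testing can be sketched as a simple comparison. The metric names and threshold values below are hypothetical examples.

```python
def check_thresholds(metrics, thresholds):
    """Return the names of metrics whose current values breach their warning threshold."""
    return [name for name, value in metrics.items()
            if value >= thresholds[name]]

# Hypothetical thresholds derived from an earlier scalability test.
thresholds = {"concurrent_users": 8000, "db_rows_millions": 50}

# Hypothetical current production measurements.
metrics = {"concurrent_users": 8500, "db_rows_millions": 12}

warnings = check_thresholds(metrics, thresholds)
print(warnings)
```

A breach of a threshold gives early warning that the limits found during scalability testing are being approached, before users experience degraded performance.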

Performance Test Planning

In addition to the general planning issues, the following factors can influence the planning of performance tests:

  • Depending on the test environment used and the software being tested, performance tests may require the entire system to be implemented before effective testing can be done. In this case, performance testing is usually scheduled to occur during system test. Other performance tests which can be conducted effectively at the component level may be scheduled during unit testing.
  • In general it is desirable to conduct initial performance tests as early as possible, even if a production-like environment is not yet available. These early tests may find performance problems (e.g., bottlenecks) and reduce project risk by avoiding time-consuming corrections in the later stages of software development or production.
  • Code reviews, in particular those which focus on database interaction, component interaction and error handling, can identify performance issues (particularly regarding “wait and retry” logic and inefficient queries) and should be scheduled early in the software lifecycle.
  • The hardware, software and network bandwidth needed to run the performance tests should be planned and budgeted. Needs depend primarily on the load to be generated, which may be based on the number of virtual users to be simulated and the amount of network traffic they are likely to generate. Failure to account for this may result in unrepresentative performance measurements being taken. For example, verifying the scalability requirements of a much-visited Internet site may require the simulation of hundreds of thousands of virtual users. Generating the required load for performance tests may have a significant influence on hardware and tool acquisition costs. This must be considered in the planning of performance tests to ensure that adequate funding is available.
  • The costs of generating the load for performance tests may be minimized by renting the required test infrastructure. This may involve, for example, renting “top-up” licenses for performance tools or using the services of a third-party provider to meet hardware needs (e.g., cloud services). If this approach is taken, the available time for conducting the performance tests may be limited and must therefore be carefully planned.
  • Care should be taken at the planning stage to ensure that the performance tool to be used provides the required compatibility with the communications protocols used by the system under test.
  • Performance-related defects often have a significant impact on the system under test. Where performance requirements are critical, it is often useful to conduct performance tests on the critical components (via drivers and stubs) instead of waiting for system tests.
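The sizing arithmetic described in the planning points above can be sketched as a rough estimate, assuming hypothetical figures for user count, request rate, and payload size:

```python
def required_bandwidth_mbps(virtual_users, requests_per_user_per_sec, bytes_per_request):
    """Rough bandwidth estimate for a load-generation rig (illustrative arithmetic only)."""
    bytes_per_sec = virtual_users * requests_per_user_per_sec * bytes_per_request
    return bytes_per_sec * 8 / 1_000_000   # bytes/s -> megabits/s

# Hypothetical figures: 100,000 virtual users, 0.2 requests/s each, 5 KB per request.
mbps = required_bandwidth_mbps(100_000, 0.2, 5_000)
print(f"estimated load-generator bandwidth: {mbps} Mbps")
```

Even a coarse estimate like this shows why simulating hundreds of thousands of virtual users can dominate the hardware and tool budget for performance testing.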

Performance Test Specification

The specification of tests for different performance test types, such as load and stress testing, is based on the definition of operational profiles. These represent distinct forms of user behavior when interacting with an application. There may be multiple operational profiles for a given application.
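One common way to represent an operational profile is as a weighted mix of user actions, from which a load generator draws the next simulated step. The profiles, action names, and weights below are hypothetical examples.

```python
import random

# Hypothetical operational profiles: relative frequency of user actions.
BROWSER_PROFILE = {"search": 0.6, "view_item": 0.3, "checkout": 0.1}
ADMIN_PROFILE = {"run_report": 0.7, "bulk_update": 0.3}

def next_action(profile, rng):
    """Draw the next simulated user action according to the profile's weights."""
    actions, weights = zip(*profile.items())
    return rng.choices(actions, weights=weights, k=1)[0]

rng = random.Random(42)   # fixed seed so the simulated session is reproducible
session = [next_action(BROWSER_PROFILE, rng) for _ in range(5)]
print(session)
```

Driving load tests from such profiles keeps the generated traffic close to the real mix of user behavior, so measured response times reflect typical use rather than an arbitrary request pattern.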