Test Strategy

The test strategy describes our general test methodology. This includes the way in which testing is used to manage product and project risks, the division of testing into levels, and the high-level activities associated with testing. (We maintain different strategies for different situations, such as different software development lifecycles, different levels of risk, or different regulatory requirements.) The test strategy, and the processes and activities described in it, will be consistent with the test policy, and it will provide generic test entry and exit criteria for the organization or for one or more programs.

As mentioned above, test strategies describe general test methodologies, which typically include:

  • Analytical strategies, such as risk-based testing, where our test team analyzes the test basis to identify the test conditions to cover. For example, in requirements-based testing, test analysis derives test conditions from the requirements; tests are then designed and implemented to cover those conditions. The tests are subsequently executed, often using the priority of the requirement covered by each test to determine the order in which the tests will be run. Test results are reported in terms of requirements status, e.g., requirement tested and passed, requirement tested and failed, requirement not yet fully tested, requirement testing blocked, etc. (A minimal sketch of this requirement-level reporting appears after this list.)
  • Model-based strategies, such as operational profiling, where our test team develops a model (based on actual or anticipated situations) of the environment in which the system exists, the inputs and conditions to which the system is subjected, and how the system should behave. For example, in model-based performance testing of a fast-growing mobile device application, one might develop models of incoming and outgoing network traffic, active and inactive users, and the resulting processing load, based on current usage and projected growth over time. In addition, models might be developed considering the current production environment’s hardware, software, data capacity, network, and infrastructure. Models may also be developed for ideal, expected, and minimum throughput rates, response times, and resource allocation. (A simple load-projection sketch based on such a profile appears after this list.)
  • Methodical strategies, such as quality characteristic-based testing, where our test team uses a predetermined set of test conditions, such as a quality standard (e.g., ISO 25000, which is replacing ISO 9126), a checklist, or a collection of generalized, logical test conditions that may relate to a particular domain, application, or type of testing (e.g., security testing), and reuses that set of test conditions from one iteration to the next or from one release to the next. For example, in maintenance testing of a simple, stable e-commerce website, our testers might use a checklist that identifies the key functions, attributes, and links for each page. Our testers will cover the relevant elements of this checklist each time a modification is made to the site.
  • Process- or standard-compliant strategies, such as those applied to medical systems subject to U.S. Food and Drug Administration standards, where our test team follows a set of processes defined by a standards committee or other panel of experts; these processes address documentation, the proper identification and use of the test basis and test oracle(s), and the organization of our test team. For example, in projects following Scrum Agile management techniques, in each iteration our testers analyze user stories that describe particular features, estimate the test effort for each feature as part of the planning process for the iteration, identify test conditions (often called acceptance criteria) for each user story, execute tests that cover those conditions, and report the status of each user story (untested, failing, or passing) during test execution.
  • Reactive strategies, such as using defect-based attacks, where our test team waits to design and implement tests until the software is received, reacting to the actual system under test. For example, when using exploratory testing on a menu-based application, a set of test charters corresponding to the features, menu selections, and screens might be developed. Each tester is assigned a set of test charters, which they then use to structure their exploratory testing sessions. Our testers periodically report the results of the testing sessions to our test manager, who may revise the charters based on the findings.
  • Consultative strategies, such as user-directed testing, where our test team relies on the input of one or more key stakeholders to determine the test conditions to cover. For example, in outsourced compatibility testing for a web-based application, a company may give the outsourced testing service provider a prioritized list of browser versions, anti-malware software, operating systems, connection types, and other configuration options that it wants evaluated against its application. The testing service provider can then use techniques such as pairwise testing (for high-priority options) and equivalence partitioning (for lower-priority options) to generate the tests. (A small pairwise-generation sketch appears after this list.)
  • Regression-averse testing strategies, such as extensive automation, where our test team uses various techniques to manage the risk of regression, especially functional and/or non-functional regression test automation at one or more levels. For example, when regression testing a web-based application, our testers can use a GUI-based test automation tool to automate the typical and exception use cases for the application. Those tests are then executed any time the application is modified. (A short GUI automation sketch appears after this list.)
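
As a minimal sketch of the requirement-level reporting described in the analytical strategies bullet, the following Python fragment rolls individual test outcomes up to requirement status. The requirement IDs, test IDs, and traceability mapping are hypothetical; in practice they would come from a test management or traceability tool.

    from collections import defaultdict

    # Each test case is traced to one requirement and carries a result.
    # A result of None means the test has not run (e.g., blocked).
    test_results = {
        "TC-01": ("REQ-001", "pass"),
        "TC-02": ("REQ-001", "pass"),
        "TC-03": ("REQ-002", "fail"),
        "TC-04": ("REQ-003", None),
    }

    def requirement_status(results):
        """Roll test outcomes up to requirement-level status."""
        by_req = defaultdict(list)
        for req, outcome in results.values():
            by_req[req].append(outcome)
        status = {}
        for req, outcomes in by_req.items():
            if any(o == "fail" for o in outcomes):
                status[req] = "tested and failed"
            elif all(o == "pass" for o in outcomes):
                status[req] = "tested and passed"
            else:
                status[req] = "not yet fully tested"
        return status

    for req, state in sorted(requirement_status(test_results).items()):
        print(f"{req}: {state}")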
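
The operational-profile modeling in the model-based strategies bullet can start as simply as projecting load from current usage and an assumed growth rate. The sketch below illustrates that idea; the user count, growth rate, and per-user request rate are illustrative assumptions, not measured figures.

    current_active_users = 10_000     # users observed in production today
    monthly_growth = 0.08             # assumed 8% month-over-month growth
    requests_per_user_per_min = 1.5   # assumed average request rate

    def projected_load(months_ahead):
        """Projected requests per minute after the given number of months."""
        users = current_active_users * (1 + monthly_growth) ** months_ahead
        return users * requests_per_user_per_min

    for horizon in (0, 6, 12):
        print(f"{horizon:>2} months out: {projected_load(horizon):,.0f} req/min")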
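
For the consultative example, pairwise coverage of the high-priority configuration options can be generated with a simple greedy algorithm: repeatedly pick the configuration that covers the most value pairs not yet covered. The browser, operating system, and connection values below are hypothetical, and real test design would normally rely on a dedicated pairwise tool.

    from itertools import combinations, product

    # Hypothetical high-priority configuration options.
    parameters = {
        "browser": ["Chrome", "Firefox", "Safari"],
        "os": ["Windows", "macOS", "Linux"],
        "connection": ["wifi", "4g"],
    }

    def pairwise_tests(params):
        """Greedy all-pairs generation over the full Cartesian product."""
        names = list(params)
        # Every pair of values from two different parameters must appear
        # together in at least one generated test.
        uncovered = {
            frozenset([(a, va), (b, vb)])
            for a, b in combinations(names, 2)
            for va in params[a]
            for vb in params[b]
        }
        candidates = [dict(zip(names, combo))
                      for combo in product(*params.values())]

        def pairs_of(test):
            return {frozenset([(a, test[a]), (b, test[b])])
                    for a, b in combinations(names, 2)}

        tests = []
        while uncovered:
            best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
            covered = pairs_of(best) & uncovered
            if not covered:
                break  # safety net; every pair is coverable, so not expected
            uncovered -= covered
            tests.append(best)
        return tests

    for test in pairwise_tests(parameters):
        print(test)

Greedy selection does not guarantee a minimal test set, but it covers all value pairs with far fewer tests than the full Cartesian product of configurations.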
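
For the regression-averse example, a GUI-level automated check might look like the following Selenium WebDriver sketch. The URL, element locators, and title check are placeholders for a real application, and running it requires a matching browser driver to be available.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_page_renders():
        """Typical-use-case regression check: the login form still renders."""
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/login")   # placeholder URL
            assert "Login" in driver.title            # placeholder expectation
            # Key elements of the use case are still present after the change.
            driver.find_element(By.NAME, "username")
            driver.find_element(By.NAME, "password")
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_login_page_renders()
        print("regression check passed")

In practice, checks like this would be collected in a test runner and executed on every modification to the application, as described in the bullet above.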

Different strategies may be combined. The specific strategies selected will be appropriate to the organization’s needs and means, and we will tailor strategies to fit particular operations and projects.

The test strategy may describe the test levels to be carried out. In such cases, it should give guidance on the entry criteria and exit criteria for each level and the relationships among the levels (e.g., division of test coverage objectives).

The test strategy may also describe the following:

  • Integration procedures
  • Test specification techniques
  • Independence of testing (which may vary depending on level)
  • Mandatory and optional standards
  • Test environments
  • Test automation
  • Test tools
  • Reusability of software work products and test work products
  • Confirmation testing (re-testing) and regression testing
  • Test control and reporting
  • Test measurements and metrics
  • Defect management
  • Configuration management approach for testware
  • Roles and responsibilities

Different test strategies might be necessary for the short term and the long term, and different strategies suit different organizations and projects. For example, where security- or safety-critical applications are involved, a more intensive strategy may be appropriate than in other cases. The test strategy will also differ for the various development models.