Test Planning

For each test level, test planning starts at the initiation of the test process for that level and continues throughout the project until the completion of closure activities for that level. It involves identifying the activities and resources required to meet the mission and objectives identified in the test strategy. Test planning also includes identifying the methods for gathering and tracking the metrics that are used to guide the project, determine adherence to plan, and assess the achievement of the objectives. By determining useful metrics during the planning stages, tools can be selected, training can be scheduled, and documentation guidelines can be established.
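As a minimal illustration of tracking adherence to plan, the sketch below compares planned against executed test cases per week. The metric and all figures are invented for the example; a real project would draw them from whatever test management tool is selected during planning.

```python
# Hypothetical progress metric: planned vs. executed test cases per week.
# All numbers are invented for illustration.
planned = {"wk1": 40, "wk2": 80, "wk3": 120}
executed = {"wk1": 35, "wk2": 70, "wk3": 118}

for week in planned:
    pct = 100 * executed[week] / planned[week]
    print(f"{week}: {executed[week]}/{planned[week]} executed ({pct:.1f}% of plan)")
```

A deviation threshold (say, execution falling below 90% of plan) could then trigger control actions such as re-planning or adding resources.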

The strategy (or strategies) selected for the testing project helps determine the tasks that occur during the planning stages. For example, when using a risk-based testing strategy, risk analysis guides the test planning process regarding the mitigating activities required to reduce the identified product risks, and helps with contingency planning. If a number of likely and serious potential defects related to security are identified, a significant amount of effort is spent developing and executing security tests. Likewise, if serious defects are usually found in the design specification, the test planning process could result in additional static testing (reviews) of the design specification.

Risk information is also used to determine the priorities of the various testing activities. For example, where system performance is a high risk, performance testing is conducted as soon as integrated code is available. Similarly, if a reactive strategy is to be employed, planning for the creation of test charters and tools for dynamic testing techniques such as exploratory testing is warranted.
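To make this prioritization concrete, the sketch below orders testing activities by a common risk-exposure heuristic (likelihood × impact). The risk items, the 1–5 scales, and the scores are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical product-risk record; the fields and scales are illustrative.
@dataclass
class ProductRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # A common heuristic: risk exposure = likelihood x impact
        return self.likelihood * self.impact

risks = [
    ProductRisk("Security: injection in login form", likelihood=4, impact=5),
    ProductRisk("Performance under peak load", likelihood=3, impact=4),
    ProductRisk("Cosmetic layout defects", likelihood=5, impact=1),
]

# Plan test effort in descending order of exposure, so the highest-exposure
# items (e.g., security, performance) are tested early and heavily.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

The ordering, not the absolute numbers, is what feeds the plan: high-exposure items get early test environment slots and more design effort, while low-exposure items may be deferred or covered by lighter techniques.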

In addition, the test planning stage is where the approach to testing is clearly defined by our Test Manager, including which test levels are employed, the goals and objectives of each level, and what test techniques are used at each level of testing. For example, in risk-based testing of certain avionics systems, a risk assessment prescribes what level of code coverage is required and thereby which testing techniques are used.
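As one concrete instance, the DO-178C standard for avionics software ties required structural coverage to the software level assigned by the safety assessment. The mapping below is a simplified summary for illustration only, not the standard's full set of objectives:

```python
# Simplified summary of DO-178C structural coverage by software level
# (A = catastrophic failure condition .. D = minor); consult the standard
# itself for the complete objectives.
coverage_by_level = {
    "A": ["statement", "decision", "MC/DC"],
    "B": ["statement", "decision"],
    "C": ["statement"],
    "D": [],  # no structural coverage objective
}

def required_coverage(software_level: str) -> list:
    return coverage_by_level[software_level]

print(required_coverage("A"))  # → ['statement', 'decision', 'MC/DC']
```

The prescribed coverage level in turn drives technique selection: MC/DC, for example, requires white-box test design against the code's decision logic, not just requirements-based black-box tests.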

Complex relationships may exist between the test basis (e.g., specific requirements or risks), the test conditions, and the tests that cover them. Many-to-many relationships often exist between these work products. These relationships must be understood to enable effective implementation of test planning, monitoring, and control. Tool decisions may also depend on an understanding of the relationships between the work products.

Relationships may also exist between work products produced by the development team and our testing team. For example, the traceability matrix may need to track the relationships between the detailed design specification elements from the system designers, the business requirements from the business analysts, and the test work products defined by our testing team. If low-level test cases are to be designed and used, the planning stage should define a requirement that the detailed design documents from the development team be approved before test case creation can start. When following an Agile lifecycle, informal transfer-of-information sessions are used to convey information between teams prior to the start of testing.

The test plan may also list the specific features of the software that are within its scope (based on risk analysis, if appropriate), as well as explicitly identifying features that are not within its scope. Depending on the levels of formality and documentation appropriate to the project, each feature that is within scope is associated with a corresponding test design specification.
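The scope definition can also be checked mechanically. In this sketch (feature and specification names invented), every in-scope feature is expected to have a corresponding test design specification:

```python
# Invented scope and specification names, for illustration only.
in_scope = {"login", "checkout", "search"}
out_of_scope = {"admin-reporting"}  # explicitly excluded, e.g., low risk this release

# Test design specifications written so far, keyed by in-scope feature.
test_design_specs = {
    "login": "TDS-001",
    "checkout": "TDS-002",
}

# Any in-scope feature without a test design specification is a planning gap.
missing = sorted(in_scope - test_design_specs.keys())
print("Features lacking a test design spec:", missing)  # → ['search']
```

On a lightweight project the same check might be a column in the test plan rather than a script; the point is that the in-scope list and the specification list should reconcile.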

There is also a requirement at this stage for our Test Manager to work with the project architects to define the initial test environment specification, verify the availability of the required resources, ensure that the people configuring the environment are committed to doing so, and understand the cost/delivery timescales and the work required to complete and deliver the test environment.

Finally, all external dependencies and associated service level agreements (SLAs) are identified and, if required, initial contact is made. Examples of dependencies are resource requests to outside groups, dependencies on other projects (if working within a program), external vendors or development partners, the deployment team, and database administrators.