Installability testing is conducted on the software and written procedures used to install the software on its target environment. This may include, for example, the software developed to install an operating system onto a processor, or an installation “wizard” used to install a product onto a client PC.
Typical installability testing objectives include:
- Validating that the software can be successfully installed by following the instructions in an installation manual (including the execution of any installation scripts), or by using an installation wizard. This includes exercising installation options for different hardware/software configurations and for various degrees of installation (e.g., initial or update).
- Testing whether failures which occur during installation (e.g., failure to load particular DLLs) are handled correctly by the installation software without leaving the system in an undefined state (e.g., partially installed software or incorrect system configurations)
- Testing whether a partial installation/de-installation can be completed
- Testing whether an installation wizard can successfully identify invalid hardware platforms or operating system configurations
- Measuring whether the installation process can be completed within a specified number of minutes or in less than a specified number of steps
- Validating that the software can be successfully downgraded or uninstalled
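Several of the objectives above can be automated as post-installation smoke checks. The sketch below is a minimal illustration, not tied to any particular installer: it assumes the installer publishes a manifest of files it should have deployed (the manifest format and file names are hypothetical) and flags anything missing or empty, which would indicate a partial or failed installation.

```python
import os
import tempfile

def verify_installation(install_dir, manifest):
    """Post-install smoke check: report manifest files that are missing
    or empty in the target directory (an empty result means the
    installation looks complete)."""
    missing = []
    for relpath in manifest:
        path = os.path.join(install_dir, relpath)
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            missing.append(relpath)
    return missing

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as target:
        # Simulate a partial installation: only one of two expected files
        with open(os.path.join(target, "app.cfg"), "w") as f:
            f.write("mode=default\n")
        problems = verify_installation(target, ["app.cfg", "app.bin"])
        print(problems)  # ["app.bin"] -> incomplete installation detected
```

In practice such a check would run immediately after the installation procedure (scripted or wizard-driven), before the functional tests described below.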
Functionality testing is normally conducted after the installation test to detect any faults which may have been introduced by the installation (e.g., incorrect configurations, functions not available). Usability testing is normally conducted in parallel with installability testing (e.g., to validate that users are provided with understandable instructions and feedback/error messages during the installation).
Computer systems which are not related to each other are said to be compatible when they can run in the same environment (e.g., on the same hardware) without affecting each other’s behavior (e.g., resource conflicts). Compatibility testing should be performed when new or upgraded software will be rolled out into environments which already contain installed applications.
Compatibility problems may arise when the application is tested in an environment where it is the only installed application (where incompatibility issues are not detectable) and then deployed onto another environment (e.g., production) which also runs other applications.
Typical compatibility testing objectives include:
- Evaluation of possible adverse impact on functionality when applications are loaded in the same environment (e.g., conflicting resource usage when a server runs multiple applications)
- Evaluation of the impact on any application resulting from the deployment of operating system fixes and upgrades
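A simple form of the first objective can be checked before deployment by comparing the resources each application declares it will use. The sketch below is an assumption-laden simplification (resources are reduced to TCP ports, and the application names are invented) that reports any resource claimed by more than one application:

```python
def find_resource_conflicts(claims):
    """Given each application's declared resource claims (here simplified
    to sets of TCP ports), return a mapping from each contested resource
    to the set of applications that claim it."""
    owners = {}
    conflicts = {}
    for app, resources in claims.items():
        for res in resources:
            if res in owners:
                conflicts.setdefault(res, {owners[res]}).add(app)
            else:
                owners[res] = app
    return conflicts

if __name__ == "__main__":
    claims = {
        "billing": {8080, 5432},
        "reporting": {8080, 9090},  # port 8080 clashes with billing
        "monitoring": {9100},
    }
    # Prints the contested resources: 8080 is claimed by both
    # billing and reporting.
    print(find_resource_conflicts(claims))
```

A static check like this only catches declared conflicts; dynamic conflicts (memory, CPU, file locks) still require running the applications together in the target environment.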
Compatibility issues should be analyzed when planning the targeted production environment, but the actual tests are normally performed after system and user acceptance testing have been successfully completed.
Adaptability testing checks whether a given application can function correctly in all intended target environments (hardware, software, middleware, operating system, etc.). An adaptive system is therefore an open system that is able to adapt its behavior to changes in its environment or in parts of the system itself. Specifying tests for adaptability requires that combinations of the intended target environments are identified, configured and available to the testing team. These environments are then tested using a selection of functional test cases which exercise the various components present in the environment.
Adaptability may relate to the ability of the software to be ported to various specified environments by performing a predefined procedure. Tests may evaluate this procedure.
Adaptability tests may be performed in conjunction with installability tests and are typically followed by functional tests to detect any faults which may have been introduced in adapting the software to a different environment.
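The combinations of intended target environments can be enumerated mechanically before execution. The following sketch builds such a test matrix; the environment dimensions and test case names are purely illustrative assumptions:

```python
import itertools

def build_test_matrix(operating_systems, databases, test_cases):
    """Pair every intended OS/database combination with the same set of
    functional test cases, yielding one entry per configuration to run."""
    matrix = []
    for os_name, db in itertools.product(operating_systems, databases):
        for case in test_cases:
            matrix.append({"os": os_name, "db": db, "case": case})
    return matrix

if __name__ == "__main__":
    matrix = build_test_matrix(
        ["linux", "windows"],        # hypothetical target OSes
        ["postgres", "oracle"],      # hypothetical target databases
        ["login", "create_order"],   # functional test case names
    )
    print(len(matrix))  # 2 OSes x 2 DBs x 2 cases = 8 runs
```

In a real project the matrix is usually pruned (e.g., pairwise combination) because exercising the full cross-product of environments quickly becomes expensive.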
Replaceability testing focuses on the ability of software components within a system to be exchanged for others. This may be particularly relevant for systems which use commercial off-the-shelf (COTS) software for specific system components.
Replaceability tests may be performed in parallel with functional integration tests where more than one alternative component is available for integration into the complete system. Replaceability may be evaluated by technical review or inspection at the architecture and design levels, where the emphasis is placed on the clear definition of interfaces to potentially replaceable components.
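The emphasis on clearly defined interfaces can be illustrated with a small sketch: two interchangeable components implement the same interface, and the same functional test cases are run against each candidate. The `TaxCalculator` interface and both implementations are hypothetical examples, not drawn from the text:

```python
from abc import ABC, abstractmethod

class TaxCalculator(ABC):
    """The clearly defined interface: any replacement component must
    provide tax(amount)."""
    @abstractmethod
    def tax(self, amount):
        ...

class FlatTax(TaxCalculator):
    # Candidate component A: flat 20% rate
    def tax(self, amount):
        return round(amount * 0.20, 2)

class TieredTax(TaxCalculator):
    # Candidate component B (e.g., a COTS alternative): tiered rate
    def tax(self, amount):
        rate = 0.10 if amount <= 100 else 0.20
        return round(amount * rate, 2)

def run_functional_tests(component):
    """The same functional test inputs are exercised through the shared
    interface, regardless of which component is plugged in."""
    return [component.tax(50), component.tax(500)]

if __name__ == "__main__":
    for component in (FlatTax(), TieredTax()):
        print(type(component).__name__, run_functional_tests(component))
```

Because the tests depend only on the interface, swapping one component for the other requires no change to the test suite, which is the property replaceability testing sets out to confirm.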