Test Techniques

Learning Objectives for Test Techniques

Specification-Based Techniques

  1. Explain the use of cause-effect graphs
  2. Write test cases from a given specification item by applying the equivalence partitioning test design technique to achieve a defined level of coverage
  3. Write test cases from a given specification item by applying the boundary value analysis test design technique to achieve a defined level of coverage
  4. Write test cases from a given specification item by applying the decision table test design technique to achieve a defined level of coverage
  5. Write test cases from a given specification item by applying the state transition test design technique to achieve a defined level of coverage
  6. Write test cases from a given specification item by applying the pairwise test design technique to achieve a defined level of coverage
  7. Write test cases from a given specification item by applying the classification tree test design technique to achieve a defined level of coverage
  8. Write test cases from a given specification item by applying the use case test design technique to achieve a defined level of coverage
  9. Explain how user stories are used to guide testing in an Agile project
  10. Write test cases from a given specification item by applying the domain analysis test design technique to achieve a defined level of coverage
  11. Analyze a system, or its requirement specification, in order to determine likely types of defects to be found and select the appropriate specification-based technique(s)

Defect-Based Techniques

Describe the application of defect-based testing techniques and differentiate their use from specification-based techniques

Analyze a given defect taxonomy for applicability in a given situation using criteria for a good taxonomy

Experience-Based Techniques

Explain the principles of experience-based techniques, and the benefits and drawbacks compared to specification-based and defect-based techniques
For a given scenario, specify exploratory tests and explain how the results can be reported

For a given project situation, determine which specification-based, defect-based or experience-based techniques should be applied to achieve specific goals

Introduction

The test design techniques considered in this chapter are divided into the following categories:

  • Specification-based (or behavior-based or black box)
  • Defect-based
  • Experience-based

These techniques are complementary and may be used as appropriate for any given test activity, regardless of which level of testing is being performed.

Note that all three categories of techniques can be used to test both functional and non-functional quality characteristics. Testing non-functional characteristics is discussed in the next chapter.

The test design techniques discussed in these sections may focus primarily on determining optimal test data (e.g., equivalence partitions) or deriving test sequences (e.g., state models). It is common to combine techniques to create complete test cases.

Specification-Based Techniques

Specification-based techniques are applied to derive test conditions and test cases from an analysis of the test basis for a component or system, without reference to its internal structure.

Common features of specification-based techniques include:

  • Models, e.g., state transition diagrams and decision tables, are created during test design according to the test technique
  • Test conditions are derived systematically from these models

Some techniques also provide coverage criteria, which can be used for measuring test design and test execution activities. Completely fulfilling the coverage criteria does not mean that the entire set of tests is complete, but rather that the model no longer suggests any additional tests to increase coverage based on that technique.

Specification-based tests are usually based on the system requirements documents. Since the requirements specification should specify how the system is to behave, particularly in the area of functionality, deriving tests from the requirements is often part of testing the behavior of the system. In some cases there may be no documented requirements but there are implied requirements such as replacing the functionality of a legacy system.

There are a number of specification-based testing techniques. These techniques target different types of software and scenarios. The sections below show the applicability for each technique, some limitations and difficulties that the Test Analyst may experience, the method by which test coverage is measured and the types of defects that are targeted.

Equivalence Partitioning

Equivalence partitioning (EP) is used to reduce the number of test cases required to effectively test the handling of inputs, outputs, internal values and time-related values. Partitioning is used to create equivalence classes (often called equivalence partitions), which are sets of values that are processed in the same manner. By selecting one representative value from a partition, coverage for all the items in that partition is assumed.
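
As an illustration, the following minimal Python sketch partitions a hypothetical "age" input field (the field, its range and the partition boundaries are assumed for illustration, not taken from any particular specification) and includes the coverage calculation described under "Coverage" below:

```python
# Hypothetical example: an "age" field specified to accept whole numbers 18-65.
partitions = {
    "invalid_low":  range(0, 18),    # values below 18 should be rejected
    "valid":        range(18, 66),   # 18-65 inclusive should be accepted
    "invalid_high": range(66, 130),  # values above 65 should be rejected
}

# One representative value per partition is assumed to cover the whole partition.
representatives = {name: next(iter(values)) for name, values in partitions.items()}
print(representatives)  # {'invalid_low': 0, 'valid': 18, 'invalid_high': 66}

# Coverage = partitions tested / partitions identified (see "Coverage" below).
tested = {"invalid_low", "valid"}
print(f"EP coverage: {len(tested) / len(partitions):.0%}")  # EP coverage: 67%
```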

Applicability

This technique is applicable at any level of testing and is appropriate when all the members of a set of values to be tested are expected to be handled in the same way and where the sets of values used by the application do not interact. The selection of sets of values is applicable to valid and invalid partitions (i.e., partitions containing values that should be considered invalid for the software being tested). This technique is strongest when used in combination with boundary value analysis which expands the test values to include those on the edges of the partitions. This is a commonly used technique for smoke testing a new build or a new release as it quickly determines if basic functionality is working.

Limitations/Difficulties

If the assumption is incorrect and the values in the partition are not handled in exactly the same way, this technique may miss defects. It is also important to select the partitions carefully. For example, an input field that accepts positive and negative numbers would be better tested as two valid partitions, one for the positive numbers and one for the negative numbers, because of the likelihood of different handling. Depending on whether or not zero is allowed, this could become another partition as well. It is important for the Test Analyst to understand the underlying processing in order to determine the best partitioning of the values.

Coverage

Coverage is determined by taking the number of partitions for which a value has been tested and dividing that number by the number of partitions that have been identified. Using multiple values for a single partition does not increase the coverage percentage.

Types of Defects

This technique finds functional defects in the handling of various data values.

Boundary Value Analysis

Boundary value analysis (BVA) is used to test the values that exist on the boundaries of ordered equivalence partitions. There are two ways to approach BVA: two value or three value testing. With two value testing, the boundary value (on the boundary) and the value that is just over the boundary (by the smallest possible increment) are used. For example, if the partition included the values 1 to 10 in increments of 0.5, the two value test values for the upper boundary would be 10 and 10.5. The lower boundary test values would be 1 and 0.5. The boundaries are defined by the maximum and minimum values in the defined equivalence partition.

For three value boundary testing, the values before, on and over the boundary are used. In the previous example, the upper boundary tests would include 9.5, 10 and 10.5. The lower boundary tests would include 1.5, 1 and 0.5. The decision regarding whether to use two or three boundary values should be based on the risk associated with the item being tested, with the three boundary approach being used for the higher risk items.
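
A minimal Python sketch of both approaches, using the hypothetical 1-to-10 partition in increments of 0.5 from the text (the helper function is an illustrative assumption, not a prescribed procedure):

```python
def boundary_values(low, high, step, three_value=False):
    """BVA test values for an ordered partition [low, high] with a given
    increment: the boundary and the value just over it (two value), plus
    the value just inside each boundary (three value)."""
    values = {low - step, low, high, high + step}   # two value testing
    if three_value:
        values |= {low + step, high - step}         # add the "just inside" values
    return sorted(values)

# The partition from the text: 1 to 10 in increments of 0.5
print(boundary_values(1, 10, 0.5))        # [0.5, 1, 10, 10.5]
print(boundary_values(1, 10, 0.5, True))  # [0.5, 1, 1.5, 9.5, 10, 10.5]
```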

Applicability

This technique is applicable at any level of testing and is appropriate when ordered equivalence partitions exist. Ordering is required because of the concept of being on and over the boundary. For example, a range of numbers is an ordered partition. A partition that consists of all rectangular objects is not an ordered partition and does not have boundary values. In addition to number ranges, boundary value analysis can be applied to the following:

  • Numeric attributes of non-numeric variables (e.g., length)
  • Loops, including those in use cases
  • Stored data structures
  • Physical objects (including memory)
  • Time-determined activities

Limitations/Difficulties

Because the accuracy of this technique depends on the accurate identification of the equivalence partitions, it is subject to the same limitations and difficulties. The Test Analyst should also be aware of the increments in the valid and invalid values to be able to accurately determine the values to be tested. Only ordered partitions can be used for boundary value analysis but this is not limited to a range of valid inputs. For example, when testing for the number of cells supported by a spreadsheet, there is a partition that contains the number of cells up to and including the maximum allowed cells (the boundary) and another partition that begins with one cell over the maximum (over the boundary).

Coverage

Coverage is determined by taking the number of boundary conditions that are tested and dividing that by the number of identified boundary conditions (either using the two value or three value method). This will provide the coverage percentage for the boundary testing.

Types of Defects

Boundary value analysis reliably finds displacement or omission of boundaries, and may find cases of extra boundaries. This technique finds defects regarding the handling of the boundary values, particularly errors with less-than and greater-than logic (i.e., displacement). It can also be used to find non-functional defects, for example tolerance of load limits (e.g., system supports 10,000 concurrent users).

Decision Tables

Decision tables are used to test the interaction between combinations of conditions. Decision tables provide a clear method to verify testing of all pertinent combinations of conditions and to verify that all possible combinations are handled by the software under test. The goal of decision table testing is to ensure that every combination of conditions, relationships and constraints is tested. When trying to test every possible combination, decision tables can become very large. A method of intelligently reducing the number of combinations from all possible to those which are “interesting” is called collapsed decision table testing. When this technique is used, the combinations are reduced to those that will produce differing outputs by removing sets of conditions that are not relevant for the outcome. Redundant tests or tests in which the combination of conditions is not possible are removed. The decision whether to use full decision tables or collapsed decision tables is usually risk-based.
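
A collapsed decision table carries over into test code almost column for column. In the Python sketch below, the discount rules, function and test names are invented for illustration; this is a sketch of the mapping, not a prescribed implementation:

```python
import pytest

def discount(is_member, order_total):
    """Hypothetical implementation under test."""
    if is_member and order_total >= 100:
        return 10
    if is_member:
        return 5
    return 0

# One column of the collapsed decision table per tuple:
# (is_member, order_total, expected discount %)
DECISION_TABLE = [
    (True,  150, 10),  # member AND total >= 100
    (True,   50,  5),  # member AND total < 100
    (False, 150,  0),  # non-member: the total is irrelevant, so the
                       # columns for both totals collapse into one
]

@pytest.mark.parametrize("is_member, order_total, expected", DECISION_TABLE)
def test_discount_rules(is_member, order_total, expected):
    assert discount(is_member, order_total) == expected
```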

Applicability

This technique is commonly applied for the integration, system and acceptance test levels. Depending on the code, it may also be applicable for component testing when a component is responsible for a set of decision logic. This technique is particularly useful when the requirements are presented in the form of flow charts or tables of business rules. Decision tables are also a requirements definition technique and some requirements specifications may arrive already in this format. Even when the requirements are not presented in a tabular or flow-charted form, condition combinations are usually found in the narrative. When designing decision tables, it is important to consider the defined condition combinations as well as those that are not expressly defined but will exist. In order to design a valid decision table, the tester must be able to derive all expected outcomes for all condition combinations from the specification or test oracle. Only when all interacting conditions are considered can the decision table be used as a good test design tool.

Limitations/Difficulties

Finding all the interacting conditions can be challenging, particularly when requirements are not well-defined or do not exist. It is not unusual to prepare a set of conditions and determine that the expected result is unknown.

Coverage

Minimum test coverage for a decision table is to have one test case for each column. This assumes that there are no compound conditions and that all possible condition combinations have been recorded in a column. When determining tests from a decision table, it is also important to consider any boundary conditions that should be tested. These boundary conditions may result in an increase in the number of test cases needed to adequately test the software. Boundary value analysis and equivalence partitioning are complementary to the decision table technique.

Types of Defects

Typical defects include incorrect processing based on particular combinations of conditions resulting in unexpected results. During the creation of the decision tables, defects may be found in the specification document. The most common types of defects are omissions (there is no information regarding what should actually happen in a certain situation) and contradictions. Testing may also find issues with condition combinations that are not handled or are not handled well.

Cause-Effect Graphing

Cause-effect graphs may be generated from any source which describes the functional logic (i.e., the “rules”) of a program, such as user stories or flow charts. They can be useful to gain a graphical overview of a program’s logical structure and are typically used as the basis for creating decision tables. Capturing decisions as cause-effect graphs and/or decision tables enables systematic test coverage of the program’s logic to be achieved.
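
The AND/OR/NOT relationships captured in a cause-effect graph can be written as a Boolean rule and expanded mechanically into the decision table it implies. The rule in this Python sketch is invented for illustration:

```python
from itertools import product

# Hypothetical rule: "approve" is caused by (valid_account AND sufficient_funds)
# OR manager_override. A cause-effect graph expresses exactly these
# AND/OR/NOT relationships between causes and effects.
causes = ["valid_account", "sufficient_funds", "manager_override"]

def approve(valid_account, sufficient_funds, manager_override):
    return (valid_account and sufficient_funds) or manager_override

# Expand the graph mechanically into the decision table it implies
for row in product([True, False], repeat=len(causes)):
    outcome = "approve" if approve(*row) else "reject"
    print(dict(zip(causes, row)), "->", outcome)
```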

Applicability

Cause-effect graphs apply in the same situations as decision tables and also apply to the same testing levels. In particular, a cause-effect graph shows condition combinations that cause results (causality), condition combinations that exclude results (not), multiple conditions that must be true to cause a result (and) and alternative conditions that can be true to cause a particular result (or). These relationships can be easier to see in a cause-effect graph than in a narrative description.

Limitations/Difficulties

Cause-effect graphing requires additional time and effort to learn compared to other test design techniques. It also requires tool support. Cause-effect graphs have a particular notation that must be understood by the creator and reader of the graph.

Coverage

Each possible cause to effect line must be tested, including the combination conditions, to achieve minimum coverage. Cause-effect graphs include a means to define constraints on the data and constraints on the flow of logic.

Types of Defects

These graphs find the same types of combinatorial defects as are found with decision tables. In addition, the creation of the graphs helps define the required level of detail in the test basis, and so helps improve the detail and quality of the test basis and helps the tester identify missing requirements.

State Transition Testing

State transition testing is used to test the ability of the software to enter into and exit from defined states via valid and invalid transitions. Events cause the software to transition from state to state and to perform actions. Events may be qualified by conditions (sometimes called guard conditions or transition guards) which influence the transition path to be taken. For example, a login event with a valid username/password combination will result in a different transition than a login event with an invalid password.

State transitions are tracked in either a state transition diagram that shows all the valid transitions between states in a graphical format or a state table which shows all potential transitions, both valid and invalid.

Applicability

State transition testing is applicable for any software that has defined states and has events that will cause the transitions between those states (e.g., changing screens). State transition testing can be used at any level of testing. Embedded software, web software, and any type of transactional software are good candidates for this type of testing. Control systems, e.g., traffic light controllers, are also good candidates for this type of testing.

Limitations/Difficulties

Determining the states is often the most difficult part of defining the state table or diagram. When the software has a user interface, the various screens that are displayed for the user are often used to define the states. For embedded software, the states may be dependent upon the states that the hardware will experience.

Besides the states themselves, the basic unit of state transition testing is the individual transition, also known as a 0-switch. Simply testing all transitions will find some kinds of state transition defects, but more may be found by testing sequences of transitions. A sequence of two successive transitions is called a 1-switch; a sequence of three successive transitions is a 2-switch, and so forth. (These switches are sometimes alternatively designated as N-1 switches, where N represents the number of transitions that will be traversed. A single transition (a 0-switch), for instance, would be a 1-1 switch.)
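
The switch sequences can be enumerated mechanically from the state model. A minimal Python sketch, using a hypothetical three-state login model (states, events and transitions are assumed for illustration):

```python
# Hypothetical state model: (source state, event) -> target state
transitions = {
    ("logged_out", "valid_login"):   "logged_in",
    ("logged_out", "invalid_login"): "locked_out",
    ("logged_in",  "logout"):        "logged_out",
    ("locked_out", "reset"):         "logged_out",
}

# 0-switch coverage: every single valid transition
zero_switch = [(src, event, dst) for (src, event), dst in transitions.items()]

# 1-switch coverage: every valid sequence of two successive transitions
one_switch = [
    (t1, t2)
    for t1 in zero_switch
    for t2 in zero_switch
    if t1[2] == t2[0]  # the second transition starts where the first ended
]

print(len(zero_switch), "transitions,", len(one_switch), "1-switch sequences")
# 4 transitions, 6 1-switch sequences
```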

Coverage

As with other types of test techniques, there is a hierarchy of levels of test coverage. The minimum acceptable degree of coverage is to have visited every state and traversed every transition. 100% transition coverage (also known as 100% 0-switch coverage or 100% logical branch coverage) will guarantee that every state is visited and every transition is traversed, unless the system design or the state transition model (diagram or table) are defective. Depending on the relationships between states and transitions, it may be necessary to traverse some transitions more than once in order to execute other transitions a single time.

The term “n-switch coverage” relates to the number of transitions covered. For example, achieving 100% 1-switch coverage requires that every valid sequence of two successive transitions has been tested at least once. This testing may stimulate some types of failures that 100% 0-switch coverage would miss.

“Round-trip coverage” applies to situations in which sequences of transitions form loops. 100% round-trip coverage is achieved when all loops from any state back to the same state have been tested. This must be tested for all states that are included in loops.

For any of these approaches, a still higher degree of coverage will attempt to include all invalid transitions. Coverage requirements and covering sets for state transition testing must identify whether invalid transitions are included.

Types of Defects

Typical defects include incorrect processing in the current state that is a result of the processing that occurred in a previous state, incorrect or unsupported transitions, states with no exits and the need for states or transitions that do not exist. During the creation of the state machine model, defects may be found in the specification document. The most common types of defects are omissions (there is no information regarding what should actually happen in a certain situation) and contradictions.

Combinatorial Testing Techniques

Combinatorial testing is used when testing software with several parameters, each one with several values, which gives rise to more combinations than are feasible to test in the time allowed. The parameters must be independent and compatible in the sense that any option for any factor can be combined with any option for any other factor. Classification trees allow for some combinations to be excluded, if certain options are incompatible. This does not assume that the combined factors won’t affect each other; they very well might, but should affect each other in acceptable ways.

Combinatorial testing provides a means to identify a suitable subset of these combinations to achieve a predetermined level of coverage. The number of items to include in the combinations can be selected by the Test Analyst, including single items, pairs, triples or more. There are a number of tools available to aid the Test Analyst in this task (see www.pairwise.org for samples). These tools either require the parameters and their values to be listed (pairwise testing and orthogonal array testing) or to be represented in a graphical form (classification trees). Pairwise testing is a method applied to testing pairs of variables in combination. Orthogonal arrays are predefined, mathematically accurate tables that allow the Test Analyst to substitute the items to be tested for the variables in the array, producing a set of combinations that will achieve a level of coverage when tested. Classification tree tools allow the Test Analyst to define the size of combinations to be tested (i.e., combinations of two values, three values, etc.).
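
For example, a pairwise subset can be generated with the open-source allpairspy package (one tool of the kind listed at www.pairwise.org; the parameters and values below are assumed for illustration):

```python
from allpairspy import AllPairs  # pip install allpairspy

# Hypothetical configuration parameters for illustration
parameters = [
    ["Windows", "macOS", "Linux"],  # operating system
    ["Chrome", "Firefox", "Edge"],  # browser
    ["en", "de", "ja"],             # language
]

# 3 x 3 x 3 = 27 full combinations; AllPairs yields a pairwise-covering subset
for i, combination in enumerate(AllPairs(parameters), start=1):
    print(i, combination)  # typically around 9-10 rows instead of 27
```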

Applicability

The problem of having too many combinations of parameter values manifests in at least two different situations related to testing. Some test cases contain several parameters each with a number of possible values, for instance a screen with several input fields. In this case, combinations of parameter values make up the input data for the test cases. Furthermore, some systems may be configurable in a number of dimensions, resulting in a potentially large configuration space. In both these situations, combinatorial testing can be used to identify a subset of combinations, feasible in size.

For parameters with a large number of values, equivalence partitioning or some other selection mechanism may first be applied to each parameter individually to reduce the number of values for each parameter, before combinatorial testing is applied to reduce the set of resulting combinations.

These techniques are usually applied to the integration, system and system integration levels of testing.

Limitations/Difficulties

The major limitation with these techniques is the assumption that the results of a few tests are representative of all tests and that those few tests represent expected usage. If there is an unexpected interaction between certain variables, it may go undetected with this type of testing if that particular combination is not tested. These techniques can be difficult to explain to a non-technical audience as they may not understand the logical reduction of tests.

Identifying the parameters and their respective values is sometimes difficult. Finding a minimal set of combinations to satisfy a certain level of coverage is difficult to do manually, so tools are usually used. Some tools support the ability to force certain (sub-)combinations to be included in or excluded from the final selection of combinations. This capability may be used by the Test Analyst to emphasize or de-emphasize factors based on domain knowledge or product usage information.

Coverage

There are several levels of coverage. The lowest level of coverage is called 1-wise or singleton coverage. It requires each value of every parameter to be present in at least one of the chosen combinations. The next level of coverage is called 2-wise or pairwise coverage. It requires every pair of values of any two parameters to be included in at least one combination. This idea can be generalized to n-wise coverage, which requires every sub-combination of values of any set of n parameters to be included in the set of selected combinations. The higher the n, the more combinations are needed to reach 100% coverage. Minimum coverage with these techniques is to have one test case for every combination produced by the tool.
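
Pairwise (2-wise) coverage can be measured mechanically: every pair of values of every two parameters must appear in at least one selected combination. A minimal Python sketch of such a check (the helper function and its name are hypothetical, for illustration):

```python
from itertools import combinations, product

def pairwise_coverage(parameters, selected):
    """Fraction of all value pairs (across every pair of parameters) that
    appear in at least one selected combination; 1.0 means 100% 2-wise."""
    required = set()
    for (i, vals_i), (j, vals_j) in combinations(enumerate(parameters), 2):
        required |= {((i, a), (j, b)) for a, b in product(vals_i, vals_j)}
    covered = set()
    for combo in selected:
        covered |= set(combinations(enumerate(combo), 2))
    return len(required & covered) / len(required)

# Two binary parameters, two chosen combinations: half of the 4 pairs covered
print(pairwise_coverage([[0, 1], [0, 1]], [(0, 0), (1, 1)]))  # 0.5
```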

Types of Defects

The most common defects found with this type of testing are defects related to the combined values of several parameters.

Use Case Testing

Use case testing provides transactional, scenario-based tests that should emulate usage of the system. Use cases are defined in terms of interactions between the actors and the system that accomplish some goal. Actors can be users or external systems.

Applicability

Use case testing is usually applied at the system and acceptance testing levels. It may be used for integration testing depending on the level of integration and even component testing depending on the behavior of the component. Use cases are also often the basis for performance testing because they portray realistic usage of the system. The scenarios described in the use cases may be assigned to virtual users to create a realistic load on the system.

Limitations/Difficulties

In order to be valid, the use cases must convey realistic user transactions. This information should come from a user or a user representative. The value of a use case is reduced if the use case does not accurately reflect activities of the real user. An accurate definition of the various alternate paths (flows) is important for the testing coverage to be thorough. Use cases should be taken as a guideline, but not a complete definition of what should be tested as they may not provide a clear definition of the entire set of requirements. It may also be beneficial to create other models, such as flow charts, from the use case narrative to improve the accuracy of the testing and to verify the use case itself.

Coverage

Minimum coverage of a use case is to have one test case for the main (positive) path, and one test case for each alternate path or flow. The alternate paths include exception and failure paths. Alternate paths are sometimes shown as extensions of the main path. Coverage percentage is determined by taking the number of paths tested and dividing that by the total number of main and alternate paths.

Types of Defects

Defects include mishandling of defined scenarios, missed alternate path handling, incorrect processing of the conditions presented and awkward or incorrect error reporting.

User Story Testing

In some Agile methodologies, such as Scrum, requirements are prepared in the form of user stories which describe small functional units that can be designed, developed, tested and demonstrated in a single iteration. These user stories include a description of the functionality to be implemented, any non-functional criteria, and also include acceptance criteria that must be met for the user story to be considered complete.

Applicability

User stories are used primarily in Agile and similar iterative and incremental environments. They are used for both functional testing and non-functional testing. User stories are used for testing at all levels with the expectation that the developer will demonstrate the functionality implemented for the user story prior to handoff of the code to the team members with the next level of testing tasks (e.g., integration, performance testing).

Limitations/Difficulties

Because stories are little increments of functionality, there may be a requirement to produce drivers and stubs in order to actually test the piece of functionality that is delivered. This usually requires an ability to program and to use tools that will help with the testing such as API testing tools. Creation of the drivers and stubs is usually the responsibility of the developer, although a Technical Test Analyst also may be involved in producing this code and utilizing the API testing tools. If a continuous integration model is used, as is the case in most Agile projects, the need for drivers and stubs is minimized.

Coverage

Minimum coverage of a user story is to verify that each of the specified acceptance criteria has been met.
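
In practice this means tracing every acceptance criterion to at least one test. A minimal Python sketch of such a traceability check (the story, criteria and test case IDs are invented for illustration):

```python
# Hypothetical user story with its acceptance criteria (invented for illustration)
story = {
    "id": "US-42",
    "title": "As a customer, I can reset my password",
    "acceptance_criteria": [
        "A reset link is emailed to the registered address",
        "The reset link expires after 24 hours",
        "An unregistered address gives no email and no account detail",
    ],
}

# Map each criterion to the test case(s) that verify it
tests_per_criterion = {0: ["TC-101"], 1: ["TC-102"], 2: []}

# Minimum coverage check: no criterion may be left without a test
uncovered = [
    story["acceptance_criteria"][i]
    for i, tests in tests_per_criterion.items()
    if not tests
]
print("Uncovered criteria:", uncovered)
```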

Types of Defects

Defects are usually functional in that the software fails to provide the specified functionality. Defects are also seen with integration issues of the functionality in the new story with the functionality that already exists. Because stories may be developed independently, performance, interface and error handling issues may be seen. It is important for the Test Analyst to perform both testing of the individual functionality supplied and integration testing any time a new story is released for testing.

Domain Analysis

A domain is a defined set of values. The set may be defined as a range of values of a single variable (a one-dimensional domain, e.g., “men aged over 24 and under 66”), or as ranges of values of interacting variables (a multi-dimensional domain, e.g., “men aged over 24 and under 66 AND with weight over 69 kg and under 90 kg”). Each test case for a multi-dimensional domain must include appropriate values for each variable involved.

Domain analysis of a one-dimensional domain typically uses equivalence partitioning and boundary value analysis. Once the partitions are defined, the Test Analyst selects values from each partition that represent a value that is in the partition (IN), outside the partition (OUT), on the boundary of the partition (ON) and just off the boundary of the partition (OFF). By determining these values, each partition is tested along with its boundary conditions.
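
For a one-dimensional domain, the four value classes can be derived directly from the partition boundaries. A minimal Python sketch, using the "men aged over 24 and under 66" example from above (the helper and its particular value choices are illustrative assumptions, not a prescribed selection rule):

```python
def domain_values(low, high, step=1):
    """IN/OUT/ON/OFF test values for a closed one-dimensional domain
    [low, high] with the given increment."""
    return {
        "ON":  [low, high],                         # on each boundary
        "OFF": [low - step, high + step],           # just off each boundary
        "IN":  [(low + high) // 2],                 # a representative interior value
        "OUT": [low - 2 * step, high + 2 * step],   # clearly outside the domain
    }

# "aged over 24 and under 66" -> the integer domain 25..65
print(domain_values(25, 65))
# {'ON': [25, 65], 'OFF': [24, 66], 'IN': [45], 'OUT': [23, 67]}
```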

With multi-dimensional domains the number of test cases generated by these methods rises exponentially with the number of variables involved, whereas an approach based on domain theory leads to a linear growth. Also, because the formal approach incorporates a theory of defects (a fault model), which equivalence partitioning and boundary value analysis lack, its smaller test set will find defects in multi-dimensional domains that the larger, heuristic test set would likely miss. When dealing with multi-dimensional domains, the test model may be constructed as a decision table (or “domain matrix”). Identifying test case values for multi-dimensional domains above three dimensions is likely to require computational support.

Applicability

Domain analysis combines the techniques used for decision tables, equivalence partitioning and boundary value analysis to create a smaller set of tests that still cover the important areas and the likely areas of failure. It is often applied in cases where decision tables would be unwieldy because of the large number of potentially interacting variables. Domain analysis can be done at any level of testing but is most frequently applied at the integration and system testing levels.

Limitations/Difficulties

Doing thorough domain analysis requires a strong understanding of the software in order to identify the various domains and potential interaction between the domains. If a domain is left unidentified, the testing can be significantly lacking, but it is likely that the domain will be detected because the OFF and OUT values may land in the unidentified domain. Domain analysis is a strong technique to use when working with a developer to define the testing areas.

Coverage

Minimum coverage for domain analysis is to have a test for each IN, OUT, ON and OFF value in each domain. Where values overlap (for example, the OUT value of one domain is an IN value in another domain), there is no need to duplicate the tests. Because of this, the actual tests needed are often fewer than four per domain.

Types of Defects

Defects include functional problems within the domain, boundary value handling, variable interaction issues and error handling (particularly for the values that are not in a valid domain).

Combining Techniques

Sometimes techniques are combined to create test cases. For example, the conditions identified in a decision table might be subjected to equivalence partitioning to discover multiple ways in which a condition might be satisfied. Test cases would then cover not only every combination of conditions, but also, for those conditions which were partitioned, additional test cases would be generated to cover the equivalence partitions. When selecting the particular technique to be applied, the Test Analyst should consider the applicability of the technique, the limitations and difficulties, and the goals of the testing in terms of coverage and defects to be detected. There may not be a single “best” technique for a situation. Combined techniques will often provide the most complete coverage assuming there is sufficient time and skill to correctly apply the techniques.

Defect-Based Techniques

Using Defect-Based Techniques

A defect-based test design technique is one in which the type of defect sought is used as the basis for test design, with tests derived systematically from what is known about the type of defect. Unlike specification-based testing which derives its tests from the specification, defect-based testing derives tests from defect taxonomies (i.e., categorized lists) that may be completely independent from the software being tested. The taxonomies can include lists of defect types, root causes, failure symptoms and other defect-related data. Defect-based testing may also use lists of identified risks and risk scenarios as a basis for targeting testing. This test technique allows the tester to target a specific type of defect or to work systematically through a defect taxonomy of known and common defects of a particular type. The Test Analyst uses the taxonomy data to determine the goal of the testing, which is to find a specific type of defect. From this information, the Test Analyst will create the test cases and test conditions that will cause the defect to manifest itself, if it exists.

Applicability

Defect-based testing can be applied at any testing level but is most commonly applied during system testing. There are standard taxonomies that apply to multiple types of software. This non-product specific type of testing helps to leverage industry standard knowledge to derive the particular tests. By adhering to industry-specific taxonomies, metrics regarding defect occurrence can be tracked across projects and even across organizations.

Limitations/Difficulties

Multiple defect taxonomies exist and may be focused on particular types of testing, such as usability. It is important to pick a taxonomy that is applicable to the software being tested, if any are available. For example, there may not be any taxonomies available for innovative software. Some organizations have compiled their own taxonomies of likely or frequently seen defects. Whatever taxonomy is used, it is important to define the expected coverage prior to starting the testing.

Coverage

The technique provides coverage criteria which are used to determine when all the useful test cases have been identified. As a practical matter, the coverage criteria for defect-based techniques tend to be less systematic than for specification-based techniques in that only general rules for coverage are given and the specific decision about what constitutes the limit of useful coverage is discretionary. As with other techniques, the coverage criteria do not mean that the entire set of tests is complete, but rather that defects being considered no longer suggest any useful tests based on that technique.

Types of Defects

The types of defects discovered usually depend on the taxonomy in use. If a user interface taxonomy is used, the majority of the discovered defects would likely be user interface related, but other defects can be discovered as a byproduct of the specific testing.

Defect Taxonomies

Defect taxonomies are categorized lists of defect types. These lists can be very general and used to serve as high-level guidelines or can be very specific. For example, a taxonomy for user interface defects could contain general items such as functionality, error handling, graphics display and performance. A detailed taxonomy could include a list of all possible user interface objects (particularly for a graphical user interface) and could designate the improper handling of these objects, such as:

  • Text field
      • Valid data is not accepted
      • Invalid data is accepted
      • Length of input is not verified
      • Special characters are not detected
      • User error messages are not informative
      • User is not able to correct erroneous data
      • Rules are not applied
  • Date field
      • Valid dates are not accepted
      • Invalid dates are not rejected
      • Date ranges are not verified
      • Precision data is not handled correctly (e.g., hh:mm:ss)
      • User is not able to correct erroneous data
      • Rules are not applied (e.g., ending date must be greater than starting date)
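
A taxonomy like the one above can be held as simple structured data and used to derive test conditions mechanically. A minimal Python sketch (the structure, field names and wording are assumed for illustration):

```python
# Defect taxonomy as structured data, mirroring the list above
taxonomy = {
    "Text field": [
        "Valid data is not accepted",
        "Invalid data is accepted",
        "Length of input is not verified",
    ],
    "Date field": [
        "Invalid dates are not rejected",
        "Date ranges are not verified",
    ],
}

# Fields of each type in the (hypothetical) application under test
fields = {"Text field": ["username"], "Date field": ["start_date", "end_date"]}

test_conditions = [
    f"Verify that '{field}' does not exhibit: {defect}"
    for object_type, defects in taxonomy.items()
    for field in fields.get(object_type, [])
    for defect in defects
]
print(len(test_conditions), "test conditions")  # 7 test conditions
```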

There are many defect taxonomies available, ranging from formal taxonomies that can be purchased to those designed for specific purposes by various organizations. Internally developed defect taxonomies can also be used to target specific defects commonly found within the organization. When creating a new defect taxonomy or customizing one that is available, it is important to first define the goals or objectives of the taxonomy. For example, the goal might be to identify user interface issues that have been discovered in production systems or to identify issues related to the handling of input fields.

To create a taxonomy:

  1. Create a goal and define the desired level of detail
  2. Select a given taxonomy to use as a basis
  3. Define values and common defects experienced in the organization and/or from practice outside

The more detailed the taxonomy, the more time it will take to develop and maintain it, but it will result in a higher level of reproducibility in the test results. Detailed taxonomies can be redundant, but they allow a test team to divide up the testing without a loss of information or coverage.

Once the appropriate taxonomy has been selected, it can be used for creating test conditions and test cases. A risk-based taxonomy can help the testing focus on a specific risk area. Taxonomies can also be used for non-functional areas such as usability, performance, etc. Taxonomy lists are available in various publications, from IEEE, and on the Internet.

Experience-Based Techniques

Experience-based tests utilize the skill and intuition of the testers, along with their experience with similar applications or technologies. These tests are effective at finding defects but not as appropriate as other techniques to achieve specific test coverage levels or produce reusable test procedures. In cases where system documentation is poor, testing time is severely restricted or the test team has strong expertise in the system to be tested, experience-based testing may be a good alternative to more structured approaches. Experience-based testing may be inappropriate in systems requiring detailed test documentation, high-levels of repeatability or an ability to precisely assess test coverage.

When using dynamic and heuristic approaches, testers normally use experience-based tests, and testing is more reactive to events than pre-planned testing approaches. In addition, test execution and evaluation are concurrent tasks. Some structured approaches to experience-based tests are not entirely dynamic, i.e., the tests are not created entirely at the same time as the tester executes the test.

Note that although some ideas on coverage are presented for the techniques discussed here, experience-based techniques do not have formal coverage criteria.

Error Guessing

When using the error guessing technique, the Test Analyst uses experience to guess the potential errors that might have been made when the code was being designed and developed. When the expected errors have been identified, the Test Analyst then determines the best methods to use to uncover the resulting defects. For example, if the Test Analyst expects the software will exhibit failures when an invalid password is entered, tests will be designed to enter a variety of different values in the password field to verify if the error was indeed made and has resulted in a defect that can be seen as a failure when the tests are run.

In addition to being used as a testing technique, error guessing is also useful during risk analysis to identify potential failure modes.

Applicability

Error guessing is done primarily during integration and system testing, but can be used at any level of testing. This technique is often used with other techniques and helps to broaden the scope of the existing test cases. Error guessing can also be used effectively when testing a new release of the software to test for common mistakes and errors before starting more rigorous and scripted testing. Checklists and taxonomies may be helpful in guiding the testing.

Limitations/Difficulties

Coverage is difficult to assess and varies widely with the capability and experience of the Test Analyst. It is best used by an experienced tester who is familiar with the types of defects that are commonly introduced in the type of code being tested. Error guessing is commonly used, but is frequently not documented and so may be less reproducible than other forms of testing.

Coverage

When a taxonomy is used, coverage is determined by the appropriate data faults and types of defects. Without a taxonomy, coverage is limited by the experience and knowledge of the tester and the time available. The yield from this technique will vary based on how well the tester can target problematic areas.

Types of Defects

Typical defects are usually those defined in the particular taxonomy or “guessed” by the Test Analyst; they are defects that might not have been found in specification-based testing.

Checklist-Based Testing

When applying the checklist-based testing technique, the experienced Test Analyst uses a high-level, generalized list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified. These checklists are built based on a set of standards, experience, and other considerations. A user interface standards checklist employed as the basis for testing an application is an example of a checklist-based test.

Applicability

Checklist-based testing is used most effectively in projects with an experienced test team that is familiar with the software under test or familiar with the area covered by the checklist (e.g., to successfully apply a user interface checklist, the Test Analyst may be familiar with user interface testing but not the specific software under test). Because checklists are high-level and tend to lack the detailed steps commonly found in test cases and test procedures, the knowledge of the tester is used to fill in the gaps. By removing the detailed steps, checklists are low maintenance and can be applied to multiple similar releases. Checklists can be used for any level of testing. Checklists are also used for regression testing and smoke testing.

Limitations/Difficulties

The high-level nature of the checklists can affect the reproducibility of test results. It is possible that several testers will interpret the checklists differently and will follow different approaches to fulfil the checklist items. This may cause different results, even though the same checklist is used. This can result in wider coverage but reproducibility is sometimes sacrificed. Checklists may also result in overconfidence regarding the level of coverage that is achieved since the actual testing depends on the tester’s judgment. Checklists can be derived from more detailed test cases or lists and tend to grow over time. Maintenance is required to ensure that the checklists are covering the important aspects of the software being tested.

Coverage

The coverage is as good as the checklist but, because of the high-level nature of the checklist, the results will vary based on the Test Analyst who executes the checklist.

Types of Defects

Typical defects found with this technique include failures resulting from varying the data, the sequence of steps or the general workflow during testing. Using checklists can help keep the testing fresh as new combinations of data and processes are allowed during testing.

Exploratory Testing

Exploratory testing is characterized by the tester simultaneously learning about the product and its defects, planning the testing work to be done, designing and executing the tests, and reporting the results. The tester dynamically adjusts test goals during execution and prepares only lightweight documentation.

Applicability

Good exploratory testing is planned, interactive, and creative. It requires little documentation about the system to be tested and is often used in situations where the documentation is not available or is not adequate for other testing techniques. Exploratory testing is often used to augment other testing and to serve as a basis for the development of additional test cases.

Limitations/Difficulties

Exploratory testing can be difficult to manage and schedule. Coverage can be sporadic and reproducibility is difficult. Using charters to designate the areas to be covered in a testing session and time-boxing to determine the time allowed for the testing is one method used to manage exploratory testing. At the end of a testing session or set of sessions, the test manager may hold a debriefing session to gather the results of the tests and determine the charters for the next sessions. Debriefing sessions are difficult to scale for large testing teams or large projects.

Another difficulty with exploratory sessions is to accurately track them in a test management system. This is sometimes done by creating test cases that are actually exploratory sessions. This allows the time allocated for the exploratory testing and the planned coverage to be tracked with the other testing efforts.

Since reproducibility may be difficult with exploratory testing, this can also cause problems when needing to recall the steps to reproduce a failure. Some organizations use the capture/playback capability of a test automation tool to record the steps taken by an exploratory tester. This provides a complete record of all activities during the exploratory session (or any experience-based testing session). Digging through the details to find the actual cause of the failure can be tedious, but at least there is a record of all the steps that were involved.

Coverage

Charters may be created to specify tasks, objectives, and deliverables. Exploratory sessions are then planned to achieve those objectives. The charter may also identify where to focus the testing effort, what is in and out of scope of the testing session, and what resources should be committed to complete the planned tests. A session may be used to focus on particular defect types and other potentially problematic areas that can be addressed without the formality of scripted testing.

Types of Defects

Typical defects found with exploratory testing are scenario-based issues that were missed during scripted functional testing, issues that fall between functional boundaries, and workflow related issues. Performance and security issues are also sometimes uncovered during exploratory testing.

Applying the Best Technique

Defect- and experience-based techniques require the application of knowledge about defects and other testing experiences to target testing in order to increase defect detection. They range from “quick tests” in which the tester has no formally pre-planned activities to perform, through pre-planned sessions to scripted sessions. They are almost always useful but have particular value in the following circumstances:

  • No specifications are available
  • There is poor documentation of the system under test
  • Insufficient time is allowed to design and create detailed tests
  • Testers are experienced in the domain and/or the technology
  • Diversity from scripted testing is a goal to maximize test coverage
  • Operational failures are to be analyzed

Defect- and experience-based techniques are also useful when used in conjunction with specification-based techniques, as they fill the gaps in test coverage that result from systematic weaknesses in these techniques. As with the specification-based techniques, there is not one perfect technique for all situations. It is important for the Test Analyst to understand the advantages and disadvantages of each technique and to be able to select the best technique or set of techniques for the situation, considering the project type, schedule, access to information, skills of the tester and other factors that can influence the selection.