Software Testing Techniques

Software testing techniques play an important role in meeting the objectives of testing: each technique serves a different purpose and helps uncover different kinds of errors. Using a variety of techniques helps reveal errors that earlier techniques missed.
Techspirited Staff
Software testing is a process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects. ~ Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans, Rex Black.
An Introduction
Software testing is a fundamental component of software quality assurance. When a program is tested, it is broken into pieces so that each can be examined for errors; the test engineer's aim is to uncover defects that have not yet been discovered. Different techniques are used for this purpose, and this article surveys them.
Strategies
Normally, testing is carried out in all stages of the software development life cycle. The advantage of this is that different defects are found at different stages of development. This minimizes cost, because it is easier to log and fix defects in the early stages. Once the entire product is ready, the cost of fixing a defect rises, since a number of other components may depend on the defective one. Testing techniques are broadly divided into two categories, namely static and dynamic.
Static
In static testing, the process is carried out without executing the program; instead, the code is analyzed statically. There are several distinct techniques within this category.
Review
Review is a powerful static technique that is carried out in the early stages. Reviews can be either formal or informal. Inspection is the most documented and formal review technique; in practice, however, the informal review is perhaps the most commonly used.
In the initial stages of development, the number of people attending a review, whether formal or informal, is small, but it increases in the later stages. A peer review is a review of a product undertaken by the peers and colleagues of the author of the component, to identify defects in the component and to recommend improvements to the system where required. The types of reviews are:
  • Walkthrough: The author of the document to be reviewed guides the participants through the document, along with his/her thought process to come to a common understanding as well as to gather feedback on the component document under review.
  • Technical Review: It is a peer group discussion, where the focus of the discussion is to achieve consensus on the technical approach taken, while developing the system.
  • Inspection: This is also a type of peer review, where the focus is on the visual examination of various documents to detect any defects in the system. This type of review is always based on a documented procedure.
Static Analysis by Tools
Static analysis tools focus on the code. These tools are used by developers before as well as sometimes during component and integration testing. The tools used include:
  • Coding Standards: Here, there is a check conducted to verify adherence to coding standards.
  • Code Metrics: Code metrics measure structural attributes of the code. As a system becomes increasingly complex, metrics help in deciding between design alternatives, especially when redesigning portions of the code.
  • Code Structure: Three main aspects of the code structure are control flow structure, data flow structure, and data structure.
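As a rough illustration of a code metric, the sketch below counts branching constructs in a piece of Python source using the standard `ast` module; the count approximates cyclomatic complexity (roughly, decision points + 1). This is a minimal sketch of the idea, not a substitute for a real static-analysis tool, and the function name `decision_points` is ours, chosen for illustration.

```python
import ast

def decision_points(source: str) -> int:
    """Count branching constructs in Python source code.

    A simplified proxy for cyclomatic complexity:
    complexity is approximately decision points + 1.
    """
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for _ in range(n):
        pass
    return "non-negative"
"""
print(decision_points(sample))  # 2 (one 'if' plus one 'for')
```

A real tool would also weigh data flow and nesting, but even this crude count grows with structural complexity and can flag candidates for redesign.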
Dynamic
In this type, the code is actually tested for defects. It is further divided into three sub-categories, namely, specification-based, structure-based, and experience-based methods.
Specification-Based
Specification-based techniques derive and select test cases from an analysis of the functional or non-functional specifications of a component or system, without any reference to its internal structure. This is also known as 'black box' or 'input/output-driven' testing, so called because the tester has no knowledge of how the system is structured inside. The tester concentrates on what the program does and is not concerned with how it does it. Functional testing concentrates on what the system does, i.e., its features or functions; non-functional testing concentrates on how well the system does it. There are five main specification-based techniques:
  1. Equivalence Partitioning: Test cases are designed to execute representative inputs from equivalence partitions (also called equivalence classes), such that every partition is covered at least once. The idea is to divide the set of test conditions into groups that can be considered the same: if one value from a group is handled correctly by the system, every other value in that group should be handled the same way. This reduces the number of test cases to execute, since only one value from each partition needs to be tested. Example: if 1 to 100 are the valid values, then 1 to 100 forms the valid partition, and a representative value such as 50 can be used to test it. But it does not end there: the system must also be checked against the invalid partitions, namely values below 1 (for example, -5) and values above 100 (for example, 125). When choosing values for the invalid partitions, pick values clearly outside the valid boundaries.
  2. Boundary Value Analysis: A boundary value is an input or output value that is on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge. This technique checks the boundaries between partitions, covering both valid and invalid boundary values. Example: if 1 to 99 are the valid inputs, then 0 and 100 are the invalid values just outside the boundaries. Hence, the test cases should be designed to include the values 0, 1, 99, and 100 to verify the behavior of the system at its edges.
  3. Decision Table: This focuses on business logic or business rules. A decision table is also known as a cause-effect table. The table lists combinations of inputs with their associated outputs, which are used to design test cases. It works well in conjunction with equivalence partitioning. The first task is to identify a suitable function whose behavior depends on a combination of inputs. If there are a large number of conditions, dividing them into subsets helps to keep the table manageable. With n conditions there are 2^n combinations of inputs: two conditions give 4 combinations, three conditions give 8, four give 16, and so on.
  4. State Transition: This is used where any aspect of the component or system can be described as a 'finite state machine'. Test cases are designed to exercise valid and invalid state transitions. In any given state, one event can give rise to only one action, but the same event from another state may cause a different action and a different end state.
  5. Use Case: This helps to identify test cases that exercise the whole system on a transaction-by-transaction basis from beginning to end. The test cases are designed to execute real-life scenarios, and they help to uncover integration defects.
Structure Based
Structure-based techniques serve two purposes: test coverage measurement and structural test case design. They are a good way to generate additional test cases that differ from those derived through the specification-based techniques. This is also known as the 'white box' strategy.
  • Test Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. The basic coverage measure is

    Coverage = (Number of coverage items exercised / Total number of coverage items) * 100%

    There is a danger in using the coverage measure: contrary to popular belief, 100% coverage does not mean that the code is 100% tested.
  • Statement Coverage and Statement Testing: This is the percentage of executable statements that have been exercised by a particular test suite. Note that a statement may sit on a single line or be spread over several lines; conversely, one line may contain more than one statement or part of a statement, and some statements contain other statements inside them. The formula for statement coverage is:

    Statement Coverage = (Number of statements exercised / Total number of statements) * 100%
  • Decision Coverage and Decision Testing: Decision statements are statements, such as 'if', loop, or 'case' statements, that have two or more possible outcomes. Decision coverage is calculated as:

    Decision Coverage = (Number of decision outcomes exercised / Total number of decision outcomes) * 100%

    Decision coverage is stronger than statement coverage: 100% decision coverage always guarantees 100% statement coverage, but the reverse is not true. To achieve full decision coverage, each decision must be exercised with both a true and a false outcome.
  • Other Structure-based Processes: They include linear code sequence and jump (LCSAJ) coverage, multiple condition decision coverage (MCDC), path testing, etc.
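As a small illustration of the coverage formulas above, the sketch below instruments a single decision by hand and computes decision coverage as exercised outcomes over total outcomes. The `traced` helper and `grade` function are hypothetical, chosen only to show the idea; in practice, tools such as coverage.py automate this instrumentation.

```python
outcomes = set()  # (decision_id, outcome) pairs exercised so far

def traced(decision_id: str, condition: bool) -> bool:
    """Record which outcome (True/False) of a decision was taken."""
    outcomes.add((decision_id, condition))
    return condition

def grade(score: int) -> str:
    # One decision ("d1") with two possible outcomes.
    if traced("d1", score >= 50):
        return "pass"
    return "fail"

def decision_coverage(total_outcomes: int) -> float:
    """Coverage = (outcomes exercised / total outcomes) * 100%."""
    return len(outcomes) / total_outcomes * 100

grade(70)                    # exercises only the True outcome of d1
print(decision_coverage(2))  # 50.0 -- the "fail" statement never ran
grade(30)                    # now the False outcome is exercised too
print(decision_coverage(2))  # 100.0
```

Note how the first call alone leaves a statement unexecuted: full decision coverage (both outcomes) is what forces the `"fail"` branch to run, which is why 100% decision coverage implies 100% statement coverage but not vice versa.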
Experience-Based
Although testing needs to be rigorous, systematic, and thorough, there are some non-systematic techniques based on a person's knowledge, experience, imagination, and intuition. An experienced bug hunter is often able to locate an elusive defect in the system. The two techniques in this category are:
Error Guessing
Here, the tester's experience is brought to bear to hunt for elusive bugs that may be present in the component or system due to errors made earlier. It is often used after the formal techniques have been applied to the code, and it has proved to be very useful. A structured approach to error guessing is to list the possible defects that could be part of the system, and then design test cases in an attempt to expose them.
Exploratory Strategies
Sometimes informally called 'monkey testing', this is a hands-on approach with minimum planning but maximum execution. Test design and test execution happen simultaneously, without formally documenting the test conditions, test cases, or test scripts. This approach is useful when the project specifications are poor or when the time at hand is extremely limited.
There are also different software testing estimation techniques; one of them involves consulting the people who will perform the activities and the people who have expertise in the tasks to be done. The techniques used to test a project depend on a number of factors, chiefly the urgency of the project, its criticality, the resources at hand, etc. At the same time, not all techniques will be used in every project; the choice depends on organizational policies.