Welcome to Zen-MTO

Testing is an essential software development activity through which software quality is ensured. In simple terms, testing validates whether a software system behaves as intended when it is executed. Testing is widely used in industry for quality assurance, and it can consume 50% or more of development costs. It is therefore essential to improve the effectiveness of testing.

Search-Based Software Testing (SBST) has been increasingly applied to a variety of testing problems, such as functional testing, integration testing, regression testing, and stress testing. The foundation of SBST is to formulate a testing problem as a mathematical optimization problem, which can then be solved efficiently with various meta-heuristic optimization algorithms (e.g., genetic algorithms). Many test optimization problems are multi-objective by nature, i.e., a set of conflicting objectives must be taken into account when searching for optimal solutions. With this in mind, and driven by the needs of our industrial collaborations, we developed Zen-MTO, a framework for multi-objective test optimization.

Regression Test Optimization

Regression testing aims to ensure that no new faults are introduced while modifying the software. It is applied frequently when developing modern software systems in industry. However, regression testing is expensive and can consume up to 80% of the overall testing budget. Several approaches improve regression testing by converting it into a multi-objective test optimization problem, including test case prioritization, test case minimization, and test case selection. Test case prioritization (TP) is one of the most widely used: it orders a set of test cases so that certain criteria (e.g., fault detection capability) are achieved as early as possible. Since it is unknown whether a test case will detect a fault before executing it, most TP techniques use structural coverage (e.g., code coverage) as one of the key prioritization goals.
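As a minimal illustration of coverage-based prioritization (the function and data below are hypothetical, not part of Zen-MTO), a greedy additional-coverage strategy repeatedly picks the test case that covers the most not-yet-covered code elements:

```python
def greedy_prioritize(coverage):
    """Order test cases so new code elements are covered as early as possible.

    coverage: dict mapping test case name -> set of covered code elements.
    Returns a list of test case names (a prioritization).
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test case adding the most not-yet-covered elements.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical coverage data: three test cases over code branches b1..b4.
suite = {"tc1": {"b1", "b2"}, "tc2": {"b2", "b3", "b4"}, "tc3": {"b1"}}
print(greedy_prioritize(suite))  # tc2 comes first: it covers the most branches
```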

Through our collaboration with Cisco, we observed that a regression test case there is typically composed of three parts: 1) setting up the test configurations of a set of video conferencing systems (VCSs) under test; 2) invoking a set of test APIs of the VCSs; and 3) checking the statuses of the VCSs after invoking the test APIs to determine whether the execution passed or failed. When executing test cases, several objectives need to be achieved, e.g., covering the maximum number of configurations. However, given the number of available test cases, it is often infeasible to execute all of them in practice due to a limited execution-time budget (e.g., 10 hours), so it is important to prioritize the test cases.
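The three-part structure of such a regression test case could be modeled as a simple record; this is an illustrative sketch only (all field names and values are hypothetical, not the actual Cisco or Zen-MTO data model):

```python
from dataclasses import dataclass, field

@dataclass
class RegressionTestCase:
    """Sketch of a three-part regression test case (hypothetical fields).

    A test case (1) sets up configurations of the VCSs under test,
    (2) invokes a set of test APIs, and (3) checks VCS statuses to
    decide pass/fail.
    """
    name: str
    configurations: set = field(default_factory=set)   # part 1, e.g. {"H323"}
    test_apis: set = field(default_factory=set)        # part 2, e.g. {"dial"}
    checked_statuses: set = field(default_factory=set) # part 3, e.g. {"CallActive"}
    execution_time_min: float = 0.0  # used when fitting a time budget

tc = RegressionTestCase("tc_dial_basic",
                        configurations={"H323"},
                        test_apis={"dial", "disconnect"},
                        checked_statuses={"CallActive"},
                        execution_time_min=12.5)
print(tc.name, sorted(tc.test_apis))
```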

In this regard, we prioritized a set of test cases [1] considering four objectives: configuration coverage, test API coverage, status coverage, and fault detection capability. More specifically, we presented a search-based test case prioritization approach called Search-based Test case prioritization based on Incremental unique coverage and Position Impact (STIPI). STIPI's fitness function incorporates two prioritization strategies: 1) Incremental Unique Coverage, i.e., for a given test case, we count only the elements (e.g., test APIs) it covers that are not already covered by the previously prioritized test cases; and 2) Position Impact, i.e., when assessing the quality of a prioritization solution, more weight is given to test cases at earlier positions (i.e., those executed earlier), since we aim to achieve high coverage of configurations, test APIs, and statuses, and high fault detection capability, as early as possible.
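The two strategies can be sketched as follows. This is a simplified, single-criterion view, and the position weight shown is illustrative, not the exact fitness function from [1]:

```python
def stipi_like_fitness(order, coverage, total_elements):
    """Score a prioritization: incremental unique coverage weighted by position.

    order: list of test case names in execution order.
    coverage: dict test case -> set of covered elements (e.g. test APIs).
    total_elements: number of distinct elements that could be covered.
    Higher is better. The position weight (n - i) / n is an illustrative
    choice: earlier positions contribute more to the score.
    """
    n = len(order)
    covered = set()
    score = 0.0
    for i, tc in enumerate(order):
        unique = coverage[tc] - covered      # incremental unique coverage
        weight = (n - i) / n                 # position impact: earlier counts more
        score += weight * len(unique) / total_elements
        covered |= coverage[tc]
    return score

cov = {"tc1": {"a1"}, "tc2": {"a1", "a2", "a3"}}
# Running tc2 first covers more unique elements early, so it scores higher.
assert stipi_like_fitness(["tc2", "tc1"], cov, 3) > stipi_like_fitness(["tc1", "tc2"], cov, 3)
```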

STIPI is implemented on top of jMetal version 4.5 [2] by defining a new problem. The problem implementation (STIPI) can be downloaded here; it is described in detail in [1].


Cluster-Based Genetic Algorithm with Elitist Selection (CBGA-ES)

Existing work has shown that multi-objective search algorithms (e.g., the non-dominated sorting genetic algorithm II, NSGA-II) are effective at solving multi-objective test optimization problems. Such algorithms typically produce a set of non-dominated solutions, i.e., solutions of equivalent quality in the sense that none is better on every objective.

However, based on our experience applying SBST to several multi-objective test optimization problems, we observed that most current multi-objective search algorithms involve a certain amount of randomness when selecting parent solutions to produce offspring, due to the selection mechanisms they employ. For example, in binary tournament selection (commonly used in the literature), two solutions are picked at random and the better one is chosen as a parent. If the selected parents are suboptimal, they may produce offspring of poor quality, which can further degrade the overall quality of the next generation. In the worst case, this randomness in parent selection may prevent the algorithm from finding optimal solutions within a limited number of generations.
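A minimal sketch of binary tournament selection (fitness minimized; the function name and data are our own, not from jMetal) makes the randomness concrete: a poor solution can still become a parent whenever both sampled candidates are poor.

```python
import random

def binary_tournament(population, fitness):
    """Pick two random individuals and return the fitter one (lower is better).

    Because candidates are drawn at random, the returned parent is only
    the better of two random picks, not the best solution overall.
    """
    a, b = random.sample(population, 2)
    return a if fitness(a) <= fitness(b) else b

pop = list(range(10))            # individual i has fitness i; 0 is the best
parent = binary_tournament(pop, fitness=lambda x: x)
print(parent)  # the winner of one random tournament; not guaranteed to be 0
```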

We argue that introducing an elitism strategy when selecting parent solutions can largely reduce such randomness. With this goal in mind, we propose a cluster-based genetic algorithm with elitist selection (CBGA-ES) [3] for supporting multi-objective test optimization. The core idea of CBGA-ES is to 1) divide the population into clusters that group solutions of similar quality, and 2) define a cluster dominance strategy to determine the best cluster, choosing only solutions from that cluster to produce offspring. Once a new population is created, the process is repeated for the next generation until the algorithm's termination conditions are met. CBGA-ES is evaluated on three regression test optimization problems: 1) test case prioritization [1], 2) test case selection [4], and 3) test case minimization [5].
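The core idea can be sketched as below. This is a strong simplification: we bucket solutions by a single aggregate objective, whereas CBGA-ES proper clusters in objective space and compares clusters with a dominance relation over all objectives; see [3] for the actual algorithm.

```python
def best_cluster_parents(population, objectives, num_clusters=3):
    """Group solutions into quality clusters and return only the best cluster.

    Simplified stand-in for the CBGA-ES idea: solutions are bucketed by the
    sum of their objective values (lower is better), and parents are drawn
    only from the best bucket, eliminating poor solutions from selection.
    """
    scored = sorted(population, key=lambda s: sum(objectives(s)))
    size = max(1, len(scored) // num_clusters)
    clusters = [scored[i:i + size] for i in range(0, len(scored), size)]
    return clusters[0]  # elitist selection: parents come only from the best cluster

# Hypothetical two-objective solutions (both objectives minimized).
pop = [(0.9, 0.9), (0.1, 0.1), (0.5, 0.5), (0.7, 0.2), (0.3, 0.8), (0.2, 0.2)]
parents = best_cluster_parents(pop, objectives=lambda s: s)
# Every selected parent is at least as good (by aggregate) as any non-parent.
assert max(sum(p) for p in parents) <= min(sum(q) for q in pop if q not in parents)
```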

CBGA-ES is implemented using jMetal version 4.5 [2]. The algorithm can be downloaded here and integrated with jMetal; it is described in detail in [3].

Related Publications

  1. D. Pradhan, S. Wang, S. Ali, T. Yue, and M. Liaaen, "STIPI: Using Search to Prioritize Test Cases Based on Multi-objectives Derived from Industrial Practice," in IFIP International Conference on Testing Software and Systems (ICTSS), pp. 172-190, Springer, 2016.
  2. J. J. Durillo and A. J. Nebro, "jMetal: A Java framework for multi-objective optimization," Advances in Engineering Software, vol. 42, pp. 760-771, 2011.
  3. D. Pradhan, S. Wang, S. Ali, T. Yue, and M. Liaaen, "CBGA-ES: A Cluster-Based Genetic Algorithm with Elitist Selection for Supporting Multi-Objective Test Optimization," in IEEE International Conference on Software Testing, Verification and Validation (ICST), pp. 367-378, IEEE, 2017.
  4. D. Pradhan, S. Wang, S. Ali, and T. Yue, "Search-Based Cost-Effective Test Case Selection within a Time Budget: An Empirical Study," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1085-1092, 2016.
  5. S. Wang, S. Ali, and A. Gotlieb, "Cost-effective test suite minimization in product lines using search techniques," Journal of Systems and Software, vol. 103, pp. 370-391, 2015.

Contact Person