In a fast-paced, technology-driven environment, achieving and maintaining quality is of high significance. But how do we quantify whether quality has been achieved in the terms the project, the organization, or the applicable standards require? Does raising bugs across an application suffice to achieve quality? Does creating and executing test cases ensure the product has met its quality parameters? Quantifying quality involves much more: it is about assuring quality via ‘The Testing Metrics’. This article discusses the Key Performance Indicators (KPIs) of quality achievement, i.e. the testing metrics to be established to evaluate quality.


When we talk about a project, what is the end goal we desire for it? As an organization or as individuals working on a project, we all strive to ensure a timely, quality-oriented, and cost-effective delivery that satisfies customer needs and expectations. Ideally, those end results should be turned into performance indicators so we can verify they are being met. So let’s dig into the key metrics one should use for measuring quality.

Defect Leakage:

This is one of the most important parameters for measuring your team’s quality. As the name signifies, it is the number of defects found in the production or UAT environment and raised by the customer or the end users. These are the bugs that were not found or logged by the QA team and were instead encountered by the end users or the customer. As an important ingredient of quality control, defect leakage plays a significant role and should ideally be zero. Defect leakage is computed as:

Defect Leakage = (Total number of bugs found in production/UAT) / (Total number of bugs logged by QA team – Total number of invalid bugs) × 100%

This parameter can be computed per tester by taking the number of bugs logged by each QA in the application and analyzing how many bugs leaked into production/UAT from the module that QA was assigned to test. The higher the defect leakage, the bigger the concerns about the quality of the software.
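The leakage formula above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the bug counts are hypothetical, not taken from any real project:

```python
def defect_leakage(prod_uat_bugs: int, qa_logged_bugs: int, invalid_bugs: int) -> float:
    """Defect leakage as a percentage of valid bugs logged by QA."""
    valid_qa_bugs = qa_logged_bugs - invalid_bugs
    if valid_qa_bugs <= 0:
        raise ValueError("need at least one valid QA-logged bug")
    return prod_uat_bugs / valid_qa_bugs * 100

# 5 bugs leaked to production/UAT; QA logged 110 bugs, 10 of them invalid.
print(f"Defect leakage: {defect_leakage(5, 110, 10):.1f}%")  # -> 5.0%
```

The same function can be applied per tester by passing in only the bugs for the module that tester owned.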

Defect Rejection Ratio:

Another key performance indicator is the defect rejection ratio. All testers should be measured on how many valid bugs they raise, which is directly proportional to their understanding of the requirements. A high number of invalid bugs suggests that the QAs have not understood the requirements, and it also consumes the development team’s time in validating issues that turn out to be invalid. Defect Rejection Ratio is computed as:
Defect Rejection Ratio = (Total number of invalid bugs raised) / (Total number of bugs logged) × 100%

This percentage should also ideally be zero. As the percentage increases, the effectiveness and efficiency of testing come into question, and confidence in the testing, and in the tester, is greatly reduced. This parameter can also be evaluated individually for each QA member in the team.
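A minimal sketch of the per-tester evaluation described above; the tester names and counts are invented for illustration:

```python
def defect_rejection_ratio(invalid_bugs: int, total_bugs: int) -> float:
    """Percentage of logged bugs that were rejected as invalid."""
    if total_bugs == 0:
        raise ValueError("no bugs logged")
    return invalid_bugs / total_bugs * 100

# Per-tester evaluation: (invalid bugs, total bugs logged) for each QA.
bugs_by_tester = {"qa_one": (3, 40), "qa_two": (0, 25)}
for name, (invalid, total) in bugs_by_tester.items():
    print(f"{name}: {defect_rejection_ratio(invalid, total):.1f}% rejected")
```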

Defect Density:

This parameter is used as a key performance indicator for both the development team and the testing team, but in two completely different ways. Before we discuss those approaches, what is defect density? Defect density is described as the number of bugs found per requirement/module. Defect Density is computed as:
Defect Density = (Total number of defects logged) / (Total number of modules/requirements)

From a software quality perspective, this number should be close to zero, or ideally zero, meaning no bugs are present :). How do we interpret this parameter from both the testing and the development perspective? From a development perspective, defect density should be low, as fewer bugs across requirements means higher software quality. On the contrary, a higher defect density suggests more rigorous and deep-driven testing was performed across requirements. Having said that, a low number of bugs does not necessarily mean less rigorous or deep-driven testing was performed. We need to use this metric in conjunction with other metrics so that together they yield the required insight. Ideally, we should all aim for a lower defect density and a high-quality product.

In agile terms, defect density can be computed as the total number of bugs logged across stories. This parameter can be traced per developer, using the number of bugs logged against the requirements each developer implemented. Similarly, it can be traced per QA, using the number of bugs logged by each QA against the requirements they were assigned to test.
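The density formula can be sketched as below. The module names are hypothetical stand-ins for whatever requirements, modules, or stories a project actually tracks:

```python
def defect_density(bugs_per_module: dict) -> float:
    """Average number of bugs per module/requirement (or per story, in agile)."""
    if not bugs_per_module:
        return 0.0
    return sum(bugs_per_module.values()) / len(bugs_per_module)

# Hypothetical modules: 12 bugs spread over 3 modules.
bugs = {"login": 4, "checkout": 7, "search": 1}
print(defect_density(bugs))  # -> 4.0
```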

Defect Severity:

This metric helps evaluate the number of bugs logged at each severity level: critical, major, medium, and minor/trivial/cosmetic. When used for performance evaluation, its purpose is to map individuals by the business importance of the bugs they raise. Having said that, this metric is controversial in its own right: logging a higher number of minor/trivial bugs, or a lower number of critical bugs, should not by itself mark an individual as inefficient. It depends entirely on the application and how well it was coded. A better approach is to use this metric in conjunction with other metrics to get deeper insight into quality achievements.

Defect Severity can be calculated as:

Critical Bug % = (Total number of critical bugs logged) / (Total number of bugs logged in AUT) × 100%

Major Bug % = (Total number of major bugs logged) / (Total number of bugs logged in AUT) × 100%

Medium Bug % = (Total number of medium bugs logged) / (Total number of bugs logged in AUT) × 100%

Minor/Trivial/Cosmetic Bug % = (Total number of minor/trivial/cosmetic bugs logged) / (Total number of bugs logged in AUT) × 100%
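All four percentages can be produced in one pass over the bug list. A sketch, with an invented set of logged severities:

```python
from collections import Counter

def severity_distribution(severities):
    """Percentage of logged bugs at each severity level in the AUT."""
    total = len(severities)
    counts = Counter(severities)
    return {sev: count / total * 100 for sev, count in counts.items()}

logged = ["critical", "major", "major", "minor"]
print(severity_distribution(logged))
# -> {'critical': 25.0, 'major': 50.0, 'minor': 25.0}
```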

Test Case Efficiency/Test Case Effectiveness:

Apart from defect counts, test case effectiveness is also considered an important KPI criterion, and it is a complex one to evaluate. How is test case efficiency/effectiveness measured? It is usually evaluated in two ways: via peer review and via analysis of the bugs discovered. After test cases are created and submitted for peer review, the reviewer reviews them; for any missing scenarios or test cases, the reviewer comments and the author updates the test cases. Those gaps are tracked against this metric. Similarly, bugs logged by QA during ad-hoc testing, or raised in production/UAT, that surfaced because of missing test cases are also tracked against it. So basically, test case efficiency/effectiveness is calculated as:
Test Case Effectiveness = (Total number of test cases missed) / (Total number of test cases written) × 100%
Again, the lower the percentage, the better the efficiency and effectiveness of the individual writing the test cases. This metric helps validate beginners/freshers in the testing field on their test coverage ability, and it provides them direction and guidance for improvement. Evaluating this metric can be complex and time-consuming, but once it is part of the process it becomes a daily activity with high-yielding results. It helps take quality achievement to a completely different level.

Test Effort:

This metric is usually computed for test leads/test managers, in two terms: time variance and cost variance. Before the start of every project, the testing effort is estimated in terms of time and cost. After the project completes, the actual time and cost are evaluated. The test effort metric is calculated from the variance in these two parameters. Time variance is calculated as:
Time Variance = (Actual time spent in the testing cycle – Estimated time for the testing cycle) / (Estimated time for the testing cycle) × 100%
Ideally, there should be no variance. But we may encounter variances even greater than 50%, which puts a question mark on the estimation performed. If such variances are noted regularly, the estimation approach should be thought through and reconsidered before the beginning of testing activities. A negative variance indicates overestimation of activities, which can also be tracked across the KPIs.

The second variance to consider is cost variance. People often confuse it with the first variance, time, but it is not the same thing. For example, suppose a testing activity was planned to finish in 3 months with 4 resources, but actually finished in 3 months with 5 resources. The time variance is unaffected, yet the cost varied because of the added resource. So cost variance is as important as time variance. Cost variance is calculated as:
Cost Variance = (Actual cost spent in the testing cycle – Estimated cost for the testing cycle) / (Estimated cost for the testing cycle) × 100%
Ideally, this percentage should also be zero; any larger variance points directly to problems in project management and estimation, and should be reviewed, controlled, and monitored in time to prevent bigger variances. As a test effort metric, these two variances should be considered hand-in-hand.
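Both variances share the same shape, so one sketch covers them. The week and person-month figures below are invented, echoing the planned-vs-actual scenario described above:

```python
def variance_pct(actual: float, estimated: float) -> float:
    """Signed variance against the estimate: positive means an overrun,
    negative means the work was overestimated."""
    return (actual - estimated) / estimated * 100

# Time: 12 weeks estimated, 15 weeks actually spent -> +25% time variance.
print(variance_pct(15, 12))  # -> 25.0
# Cost: 4 testers x 3 months planned (12 person-months), 5 testers used
# (15 person-months) -> +25% cost variance even though the schedule held.
print(variance_pct(15, 12))  # -> 25.0
```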
The metrics above play a major role in evaluating quality and achieving quality goals, but many more metrics can be used to detail out the performance indicators. When establishing test metrics in an organization, it is important to be clear about the conclusions to be drawn from each parameter. If a metric yields no conclusions or actionable items, it should be eliminated then and there, as it is not serving its purpose. Another important aspect to keep in mind: whatever metrics are rolled out in an organization shape the mindset of its employees, and all their work evolves around them. So evaluating the right set of metrics for the organization is crucial and worth careful consideration.
We can evaluate KPIs on the parameters above and place individuals or teams into different zones per release, quarter, or yearly cycle. We can also lay down the path of improvement and the goals for the next releases/quarters based on these KPIs. The task of QAs should be not just ensuring that quality work is carried out, but also ensuring quality is achieved to the degree defined.
Closing the post with the below quote on KPIs:
“Not everything that can be counted counts, and not everything that counts can be counted.”

Albert Einstein


Author: SadhvI