Tool support is necessary to manage your test cases, bugs, test plans, test reports, and other test artifacts, as it makes them easier to maintain and access. Today, multiple tools are available in the market to ease this work. Since quality assurance revolves around a lot of documentation, having the right set of tools that fulfil your organization's requirements is crucial for the software testing industry. A wrong tool selection can lead to chaos among team members and a loss of productivity and efficiency. The main aim of having the right tool is to improve the effectiveness and efficiency of each individual's work.
So let's start with: how do you evaluate the right tool for your organization, for the right purpose?
First and foremost, as a member representing your QA team, you should be crystal clear about your team's requirements and its daily, ongoing challenges. If not, the first step towards selection is learning what needs or challenges your team faces. You can circulate surveys or spreadsheets that highlight and focus on their pain areas and ask the team to fill them out. This is one of the building blocks of your tool-evaluation prerequisites. For example, say one QA highlights being unable to create a test suite from a set of test cases (with Excel as the current mode of test case creation), another QA mentions an issue with the reusability of test cases, and so on.
Highlighting a few of the issues:
- The creation of test steps and test suites
- The manual effort of recording execution status across test cases
- Auto-linking of bugs with test cases
- Auto-linking of stories with test cases
- The need for a systematic defect cycle
- Dashboard creation
- Reusability of test cases
- Version control
- Auto-creation of reports based on available stats
- Emails sent to stakeholders via the tool on execution updates
The above are the issues raised by the QA team, but another key set of requirements to consider comes from the organizational perspective. An organization is built upon a predefined process path, operational availability, and financial budgeting across departments. These also need to be considered while evaluating a test tool.
Highlighting a few of the organizational perspectives:
- Budget allocation for the tool (either an open-source tool due to budget constraints, or a tool that fits the allocated budget)
- The methodology followed across projects in the organization, such as Agile or waterfall, or perhaps none at all
- Installation prerequisites (these may or may not apply from an organizational perspective; some CMMI-level organizations have a well-equipped system administration team and an approval hierarchy with clear rules about which architectures and languages they will or will not support)
- The involved user base and role hierarchy
- Privacy and user permissions
Once we have clarity on the requirements from these sources, we can build a matrix and start analyzing tools on the same grounds. All these requirements become prerequisites for the tool-selection process. That said, not every tool will fulfil all the requirements; the catch is that whichever tool comes closest to the requirement set becomes the tool of choice.
Below is a snapshot of one of the tool analysis matrices performed:
The highlighted tools were taken into consideration based on the organizational perspective and were analyzed against the requirement set from the QA team, listed in the leftmost column. The tool chosen after the analysis was TestLink, as it fulfilled the maximum number of requirements.
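The scoring behind such a matrix can be sketched in a few lines of code. The sketch below is a hypothetical, simplified version: the candidate tool names (other than TestLink), the requirement weights, and the met/not-met values are illustrative assumptions, not the actual analysis.

```python
# Hypothetical tool-evaluation matrix: requirements (rows) vs. candidate
# tools (columns). 1 = requirement met, 0 = not met; weights reflect
# assumed team priorities, not real data.
requirements = {
    "Test suite creation":       {"weight": 3, "TestLink": 1, "Tool B": 1, "Tool C": 0},
    "Auto-linking of bugs":      {"weight": 2, "TestLink": 1, "Tool B": 0, "Tool C": 1},
    "Reusability of test cases": {"weight": 3, "TestLink": 1, "Tool B": 1, "Tool C": 0},
    "Version control":           {"weight": 2, "TestLink": 1, "Tool B": 0, "Tool C": 0},
    "Report auto-creation":      {"weight": 1, "TestLink": 0, "Tool B": 1, "Tool C": 1},
}

tools = ["TestLink", "Tool B", "Tool C"]

# Weighted score per tool: sum of (weight * met) over all requirements.
scores = {
    tool: sum(row["weight"] * row[tool] for row in requirements.values())
    for tool in tools
}

# The tool closest to the requirement set wins.
best = max(scores, key=scores.get)
print(scores)            # {'TestLink': 10, 'Tool B': 7, 'Tool C': 3}
print("Selected:", best)  # Selected: TestLink
```

Weights let the team mark must-have requirements (e.g., test suite creation) as more important than nice-to-haves, so the "closest to the requirement set" decision is explicit rather than a gut call.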
Now let’s dig into the process of rolling out the selected tool.
After the tool is selected, it is piloted. Rather than releasing the tool to the whole QA department, only a subset of people receives it, to experiment with and use in their daily activities. They are given training on the tool's workflow. After a one- or two-week trial, their experiences, with pros and cons, are noted and a further evaluation is done. If the evaluation concludes that the tool achieves its purpose, it is planned for a major rollout. If the tool does not accomplish the purpose, the tool-evaluation matrix activity above is performed again.
Once the tool is planned for a major rollout, a proper training program is prepared to train the team on the tool and the best practices for using it. After training, the tool is rolled out across the organization.
An important and often forgotten activity after the rollout is collecting metrics on the tool's usage. It is important to measure how the tool has changed the organization's way of working and to gather metrics accordingly. It is also important to run timely audits on the tool to assure that best practices are followed and the expected results are achieved.
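As a sketch of what such metrics collection might look like, the snippet below computes weekly execution coverage from usage counts that could be exported from the tool. The figures are illustrative assumptions, not real adoption data.

```python
# Hypothetical weekly usage counts exported from the test management tool.
weekly_usage = [
    {"week": 1, "cases_created": 40, "cases_executed": 25},
    {"week": 2, "cases_created": 55, "cases_executed": 48},
    {"week": 3, "cases_created": 60, "cases_executed": 57},
]

# Execution coverage per week: executed / created, rounded to two decimals.
coverage = [
    round(w["cases_executed"] / w["cases_created"], 2) for w in weekly_usage
]
print(coverage)  # [0.62, 0.87, 0.95] - rising coverage suggests growing adoption
```

Trends like this (rather than one-off snapshots) are what make the post-rollout audit meaningful: a flat or falling curve flags teams that may need more training or a revisit of the tool choice.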
Closing with a relevant quote:
“Just like with everything else, tools won’t give you good results unless you know how, when, and why to apply them. If you go out and you buy the most expensive frying pan on the market it’s still not going to make you a good chef.”
Author: Sadhvi Singh
About Author: http://www.theqavibes.com/p/about-author.html