In one of the courses we teach, we classify anything that helps you test as a test tool. That can range from a simple checklist to a sophisticated test suite with all the bells and whistles. Every tool on that spectrum has a place in our testing armoury, and we have made use of all of them.
However, no matter how easy the test tool is to create and implement, we encounter the same issue over and over again: the test tool is treated as an afterthought, with little or no planning or long-term consideration. There seems to be an attitude that the test tool can be created or implemented in a few hours and will be eternally useful. Then everyone is surprised when, a few months later, work is wasted or repeated and effort is thrown in the garbage…
Test tool implementation needs to be treated as a project in its own right.
This applies to the entire spectrum of test tools, from the simplest to the most complex. Even a simple checklist, created so your project can receive code or install on multiple machines, may be useful for other projects or for the people responsible for fulfilling the requirements.
- A sophisticated test tool usually has far more features than are apparent at the time of evaluation or purchase. If we have paid for those features, we might as well use them. That requires planning and consideration of the entire SDLC (Software Development Life Cycle), not just testing or bug tracking.
- The second consideration is ensuring that testers and other users receive the training needed to use the features properly and fully. People don’t know what they don’t know; we have discovered features later that would have saved time and effort.
- The last aspect is planning the data in the test tool. That data may be test cases, automated test scripts, test data or results. No one likes recreating items that already exist. Reuse is crucial, and reuse only comes with planning.
We cannot successfully implement a test tool without a project to back it up.
This means the full gamut of determining:
- The list of requirements for the test tool (for the entire organization, not just one project).
- Research into the available test tools, narrowing the candidates down to two or three to be evaluated in detail.
- A realistic ROI calculation, recognizing that some of the benefits accrue over multiple projects and time periods rather than a single project (automation and regression testing are often the major ROI drivers for some tools).
- Written and approved acceptance criteria with weightings for the test tool to be accepted.
- A formal evaluation process against standard criteria.
- Selection based on the calculated results of the evaluation.
- Training and implementation.
- Review after a year to confirm the benefits are still being realized. Without this review, the tool may be used once and then abandoned without ever delivering the expected benefits.
- Continuous monitoring to ensure that the tool is up-to-date and being fully utilized.
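To illustrate the multi-project ROI point above, here is a minimal sketch. All figures (tool cost, savings per run, project counts) are invented for illustration; the point is only that a calculation scoped to a single project can look like a loss while the organization-wide, multi-year view shows a clear payback:

```python
# Hypothetical multi-project ROI sketch for a test automation tool.
# All numbers below are invented assumptions, not real benchmarks.

tool_cost = 20_000                 # licence + training + implementation
saving_per_regression_run = 1_500  # manual effort avoided per automated run
runs_per_project = 6
projects_per_year = 4
years = 3

# A single project in isolation would look like a bad deal:
single_project_savings = saving_per_regression_run * runs_per_project  # 9,000 < 20,000

# Across the whole organization and time horizon, the benefit accrues:
total_savings = saving_per_regression_run * runs_per_project * projects_per_year * years
roi = (total_savings - tool_cost) / tool_cost

print(f"Single-project savings: {single_project_savings}")
print(f"Total savings over {years} years: {total_savings}")
print(f"ROI: {roi:.0%}")
```

With these assumed figures, one project recoups less than half the cost, while three years of use across the organization yields several times the investment.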
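The weighted acceptance criteria, formal evaluation, and calculated selection steps above can be sketched in a few lines. The criteria, weights, tool names, and scores here are all hypothetical; the mechanism is simply a weighted sum of agreed scores:

```python
# Hypothetical weighted-criteria evaluation of shortlisted test tools.
# Criteria, weights, tools, and raw scores are invented for illustration.

criteria_weights = {
    "requirements coverage": 0.40,
    "ease of use": 0.25,
    "integration with CI": 0.20,
    "licence cost": 0.15,
}

# Raw scores (1-5) awarded by the evaluation team against each criterion.
raw_scores = {
    "Tool A": {"requirements coverage": 4, "ease of use": 3,
               "integration with CI": 5, "licence cost": 2},
    "Tool B": {"requirements coverage": 5, "ease of use": 4,
               "integration with CI": 3, "licence cost": 4},
}

def weighted_score(scores):
    """Combine raw scores into one number using the approved weightings."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

totals = {tool: weighted_score(scores) for tool, scores in raw_scores.items()}
winner = max(totals, key=totals.get)

for tool, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{tool}: {total:.2f}")
print(f"Selected: {winner}")
```

Writing the weightings down and agreeing on them before scoring is the crucial part: it keeps the selection defensible and stops the loudest voice in the room from deciding the outcome.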
Very few organizations embark on the acquisition of software without a plan. Why should test tools be any different? 🧐