It’s important to periodically review your software testing strategy to make sure it’s still guiding your team on the path to success. As goals change, business models shift, and so on, so should your strategy (which is not to be confused with tactics; more on that below)! In this post, I want to share the different aspects that together form a test strategy and how you can review and revise them over time for the best results ✅
Table of Contents – How to Review Your Software Testing Strategy
- Strategies vs. Tactics
- Who’s responsible for reviewing the testing strategy?
- A model for your test strategy
- Test strategy aspects to review frequently
- Pick testing metrics carefully
- Covering all the bases of your test strategy
First of all, a strategy is typically based on taking a particular direction to achieve a goal. Tactics, on the other hand, are the details of how to put the strategy into play: what tools to use and how. A strategy has a more long-term vision, while tactics focus on the short term and on specific actions.
Some typical issues teams face when it comes to strategies and tactics include:
- Not enough time spent on strategic planning, often due to the misconception that it’s only needed within larger companies.
- Teams bypass strategic planning and go straight to brainstorming tactics because, for example, something worked for a competitor, there’s a trending tactic they want to jump on, etc.
It’s essential to take time on a regular basis, not only to design the test strategy, but to review it. Analyze what the current context is, see what progress has been made, where the team stands, and where it can aspire to go, and from there, define the next steps to get closer to that horizon.
Although most of the world still refers to software testing as software quality assurance, it’s important to first understand that there is no such thing as “assuring software quality.” A tester’s role is to provide information about the quality and the risks of a product, for someone else who is going to do something with that information—someone who is going to make a decision.
No software will ever be perfect, but we can maximize our efforts to get as much information about the product quality as possible, focusing on the areas that carry the most potential for business risk. Quality should be seen as a shared responsibility: it’s not the tester who’s responsible for quality, but, as I see it, testers should indeed be responsible for testing.
This doesn’t necessarily mean testers are the ones who should execute all the testing activities, since, for example, unit testing is a typical task for developers. Rather, testers should be able to provide information about the quality of the product to the decision-makers (the PM, the team, the CEO, whomever). Responsible testers have to report on different aspects of quality, including the coverage of unit tests, because that allows them to make better decisions about what to automate or what to test at other levels, taking a more transversal view of risks and how they’re managed.
So, what should testing focus on when reporting about the quality of the product?
- Disclosing the current risks and problems
- Reporting what is being tested and how well or poorly it’s being done (considering the context and any constraints like a lack of time)
- Reporting what is not being tested
I’ve created an example of a model with all the aspects that, in my experience helping to execute 300 client testing projects, have a major impact on quality:
The most important aspect to pay attention to is the timeline from when a new idea or requirement is introduced to when it ends up in the hands of the user. In continuous delivery, this is called “lead time.” The main goal is to find ways to improve this metric (which is the one that matters most to the business). Then, look for any waste, whether of time or quality, what tools could be incorporated, what process adjustments could be made, etc.
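As a minimal illustration, lead time can be computed from records your team likely already keeps. The sketch below uses entirely hypothetical work items and date fields; adapt it to whatever your tracker actually exports:

```python
from datetime import datetime
from statistics import median

# Hypothetical records: when a requirement was created and when it reached users.
work_items = [
    {"id": "FEAT-1", "created": "2024-01-02", "released": "2024-01-16"},
    {"id": "FEAT-2", "created": "2024-01-05", "released": "2024-01-12"},
    {"id": "FEAT-3", "created": "2024-01-08", "released": "2024-02-01"},
]

def lead_time_days(item):
    """Days from idea/requirement to delivery in users' hands."""
    created = datetime.fromisoformat(item["created"])
    released = datetime.fromisoformat(item["released"])
    return (released - created).days

times = [lead_time_days(i) for i in work_items]
print(f"median lead time: {median(times)} days")
```

Tracking the median (rather than the average) keeps one unusually slow item from hiding a generally healthy trend.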
While none of these things are new, here are some aspects of the model that should be reviewed due to their importance, which are often neglected.
It might be obvious, but it’s worth stating: know your company, project, and/or product, as well as its planning, objectives, and goals. Have a shared vision of where it’s going. Does everyone have a shared understanding of the current problems, risks, and main concerns? Also know your quality criteria, requirements, regulations, and the standards to meet.
Fundamentally important, do you know how clients and your own teammates feel about the product quality?
Another foundational part of your test strategy to have clear is knowing how the team is made up, what its interactions are like, how the relationships are, and how close its members are to one another. Does everyone know each other’s roles and differing skill sets? What is everyone’s participation like in the different development phases?
Shift-left testing means integrating the testing activities with development, beginning earlier in the development cycle, rather than in the later stages, as in traditional software development models such as Waterfall.
Shift-left testing looks a little bit like this:
- Start with testing before coding, thinking about how we are going to test a feature before a developer starts to code it
- Planning, creating, and automating test cases at different levels
- Gathering, prioritizing, and processing feedback as early as possible
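The “testing before coding” step above can be sketched in a few lines. The feature here (a bulk discount) and the function name `price_with_discount` are hypothetical, invented only to show the order of events: the test captures the agreed behavior first, then a minimal implementation makes it pass:

```python
# Shift-left sketch: the test is written before any production code exists,
# encoding the behavior the team agreed on for a hypothetical discount feature.

def test_bulk_discount():
    # Agreed behavior: 10% off for orders of 10+ units, no discount below that.
    assert price_with_discount(unit_price=5.0, quantity=10) == 45.0
    assert price_with_discount(unit_price=5.0, quantity=5) == 25.0

# Minimal implementation written afterwards, just enough to satisfy the test.
def price_with_discount(unit_price, quantity):
    total = unit_price * quantity
    return total * 0.9 if quantity >= 10 else total

test_bulk_discount()
print("shift-left test passes")
```

In a real project the test would live in a test suite (e.g., run by a test runner) and the implementation in production code; the point is only the sequence.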
To evaluate this, you can ask yourself, how early are we involving testing? Are we utilizing our test talent in all the potential areas they could provide value? Is there traceability between testing, development, and requirements?
While shift-left testing is becoming more common, shift-right testing is equally important. Shift-right implies testing in production: gathering information about the quality of the product once it’s in users’ hands. It doesn’t necessarily mean executing tests in production, though it can include that. We can also gather information by reviewing logs, traces, monitoring tools, and so on.
To review this, you can ask yourself, “How much production information is available to improve the process?” For example, review the bugs encountered by the users, logs, monitoring, analysis of how much users engage with the different parts of the system, etc.
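As a rough sketch of mining production logs for risk signals (the log format, endpoints, and threshold below are all made up for illustration):

```python
from collections import Counter

# Hypothetical production log lines: "<status> <endpoint>"
log_lines = [
    "200 /checkout",
    "500 /checkout",
    "200 /search",
    "200 /checkout",
    "404 /profile",
    "500 /checkout",
]

errors = Counter()  # 5xx responses per endpoint
hits = Counter()    # total requests per endpoint
for line in log_lines:
    status, endpoint = line.split()
    hits[endpoint] += 1
    if status.startswith("5"):
        errors[endpoint] += 1

# Flag endpoints whose server-error rate suggests a risk worth investigating.
for endpoint, count in hits.items():
    rate = errors[endpoint] / count
    if rate > 0.25:
        print(f"{endpoint}: {rate:.0%} server errors")
```

Real setups would pull this from a monitoring or observability tool rather than raw lines, but the output answers the same strategic question: where are users actually hitting problems?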
Here, don’t distinguish between “manual testing” and “automated testing”; rather, define a software testing strategy with a more holistic vision of testing, thinking about what to execute “manually” and how to rely on automation in the different layers. It’s also important to review how to best apply testing heuristics. A fresh pair of eyes always helps to uncover bias or aspects we’ve learned to assume are fine.
Review the software quality attributes such as performance efficiency, maintainability, compatibility, usability, accessibility, etc. and how they’re being tested. Prioritize which attributes are most important to the end-users and all stakeholders.
Review the use of test environments. Analyze if everyone knows the objective of each one and if there is something to improve. It’s also important to analyze the platforms from which users are accessing the product (using Google Analytics or by reviewing logs) and ensure that testing is done as similarly as possible. For this, various online platforms such as Sauce Labs or BrowserStack can be used.
Provide visibility into code management, branching strategy, etc. This helps everyone stay aligned on the risks that exist in how the code is managed.
Some things to consider:
- Are we doing peer reviews or pair programming? Would that be useful for the team?
- Are static code verification tools used, such as SonarQube? Review the strategy for managing technical debt.
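Dedicated tools like SonarQube do this at scale, but the core idea of a static check can be sketched with Python’s standard `ast` module. The “line budget” rule below is a toy maintainability signal invented for illustration, not an actual SonarQube rule:

```python
import ast

# Toy static check: flag functions whose body exceeds a small line budget,
# a crude proxy for complexity/maintainability issues.
source = '''
def short():
    return 1

def long_one():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
'''

MAX_LINES = 5
tree = ast.parse(source)
flagged = []
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        length = node.end_lineno - node.lineno + 1
        if length > MAX_LINES:
            flagged.append(node.name)

print("flagged:", flagged)
```

The value of such checks in a strategy review is less the individual findings and more the trend: is technical debt being paid down or accumulating?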
Giving visibility to the strategy regarding CI/CD is essential. Pipelines need maintenance as any software does; over time, things get outdated or inefficient so we need to review the whole process, not only fix the specific parts.
Also, reviewing the CI/CD strategy helps ensure that everyone is aligned, understands what it’s for, and contributes ideas for improving the process. Adding automatic validations to the different stages of the pipeline is crucial for an efficient and reliable delivery process.
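The fail-fast staging that makes these validations valuable can be sketched in plain Python. The stage names and commands below are placeholders standing in for a real CI configuration, not actual lint/test/scan invocations:

```python
import subprocess

# Hypothetical pipeline stages, each an automatic validation gate; the
# commands are placeholders for whatever your project actually runs.
stages = [
    ("lint", ["python", "-c", "print('lint ok')"]),
    ("unit tests", ["python", "-c", "print('tests ok')"]),
    ("security scan", ["python", "-c", "print('scan ok')"]),
]

def run_pipeline(stages):
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:  # fail fast: later stages never run
            return f"FAILED at {name}"
    return "all stages passed"

print(run_pipeline(stages))
```

Reviewing the pipeline then becomes concrete questions: which gates exist at each stage, which are missing, and which have quietly gone stale.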
A part of a good testing strategy is to use the right metrics and KPIs to your advantage. Testing always looks to provide more information in order to have less uncertainty and better control over risks, but that information has to be analyzed carefully.
As Simon Sinek famously said, “Start with why.” Let’s start by thinking about the reason, the purpose. Why do we want to measure something? Pick metrics that are actually tied to things that can improve business performance. Vanity metrics, like the number of bugs reported, shouldn’t be the main focus, because they may cause your team to spend time on less critical areas.
Some testing metrics to consider include those relating to user satisfaction, the development process, cycle and lead time, test coverage, code quality, severity of incidents, exploratory testing, test automation reliability, performance metrics (response times and resource usage), and team happiness.
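Two of these, the defect escape rate and test automation reliability, are easy to compute once the raw counts are collected. The numbers below are purely illustrative:

```python
# Hypothetical sprint data; all names and numbers are illustrative only.
bugs_found_before_release = 18
bugs_found_in_production = 2
automated_runs = 200
flaky_failures = 6  # failures not caused by real product bugs

# Defect escape rate: the share of known defects that reached users.
escape_rate = bugs_found_in_production / (
    bugs_found_before_release + bugs_found_in_production
)

# Automation reliability: how often the suite gives a trustworthy signal.
automation_reliability = 1 - flaky_failures / automated_runs

print(f"defect escape rate: {escape_rate:.0%}")
print(f"automation reliability: {automation_reliability:.1%}")
```

Both tie directly to decisions: a rising escape rate points at gaps in pre-release coverage, while falling automation reliability means the team is learning to ignore red builds.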
So far, this model has touched upon reviewing what we know we have to review, but just as important to contemplate is: what are the unknown unknowns? This is when the ability to use critical thinking is essential 💡
I’m curious… Do you have a habit of reviewing these aspects periodically? Any ideas on other aspects you think should be added to the model? Leave a comment!