It’s important to review your software testing strategy to ensure it’s still guiding your team on the path to success. As goals change and business models shift, so should your strategy. In this post, I want to share the different aspects that form a test strategy and how you could review and revise them over time for the best results ✅
Table of Contents – How to Review Your Software Testing Strategy
- Strategies vs. Tactics
- Who’s responsible for reviewing the testing strategy?
- A model for your test strategy
- Test strategy aspects to review frequently
- Pick testing metrics carefully
- Covering all the bases of your test strategy
A strategy is based on taking a particular direction to achieve a goal. There are tactics that detail how to put the strategy into play, what tools to use, and how to use them. A strategy has a more long-term vision, while tactics focus on the short term and on specific actions.
Some issues teams face when developing strategies and tactics include:
- Not spending enough time on strategic planning, often due to the misconception that it’s only needed within larger companies.
- Teams bypass strategic planning and go straight to brainstorming tactics. This may happen because something worked for a competitor, because there's a trending tactic they want to jump on, or for other reasons.
It’s essential to take time on a regular basis, not only to design the test strategy but to review it. Analyze the current context, see what progress the team is making, where it stands, and where it can aspire to go. Then you can plan how to achieve the goal.
Although most of the world refers to software testing as software quality assurance, it’s important to understand that there is no such thing as “assuring software quality.” A tester’s role is to provide information about the quality and the risks of a product, for someone who will make a decision about the product.
No software is perfect, but we may maximize our efforts to ensure we get as much information about the product’s quality as possible, focusing on the areas that carry the most potential for business risk. Quality is a shared responsibility: testers aren’t responsible for quality, but they are responsible for testing.
This doesn’t mean testers should execute all the testing activities themselves; unit testing, for example, is typically a developer’s task. Rather, testers should provide information about the quality of the product to decision-makers such as the project manager, the team, or the CEO. Responsible testers report on different aspects of quality, including the coverage of unit tests, because that allows them to make better decisions about what to automate or what to test at other levels, taking a more transversal view of risks and their management.
So, what should testing focus on when reporting about the quality of the product?
- Disclosing the current risks and problems.
- Reporting what is being tested and how well or poorly it’s being done when considering the context and any constraints like a lack of time.
- Reporting what is not being tested.
I’ve created an example of a model with aspects having a major impact on quality based on my experience helping to execute 300 client testing projects:
The most important aspect to pay attention to is the timeline from when a new idea or requirement is introduced to when it ends up in the hands of the user. In continuous delivery, this is called “lead time.” The main goal is to look for ways to improve this metric because it is the one that matters most to the business. Then, look for any waste, whether of time or quality, and consider what tools could be incorporated, what process adjustments could help, and more.
While none of these things are new, here are some aspects of the model that should be reviewed due to their importance, which is often neglected.
Know your company, project, and/or product, along with its planning, objectives, and goals, and have a shared vision of where it’s going. Does everyone have a shared understanding of the current problems, risks, and main concerns? Does everyone know the quality criteria, requirements, regulations, and standards to meet?
Vitally, do you know how clients and your teammates feel about the product quality?
Another foundation of your test strategy is to know how the team is made up, how its members interact, and how close they are to one another. Does everyone know each other’s roles and differing skill sets? What is everyone’s participation like in the different development phases?
Shift-left testing means integrating testing activities with development by beginning earlier in the development cycle, instead of the later stages.
Shift-left testing looks a little bit like this:
- Start with testing before coding and think about how you will test a feature before a developer starts to code it.
- Planning, creating and automating test cases at different levels.
- Gathering, prioritizing, and processing feedback as early as possible.
To evaluate this, ask yourself the following questions:
- How early are we involving testing?
- Are we utilizing our test talent in all the potential areas they could provide value?
- Is there traceability between testing, development, and requirements?
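A test-first workflow is one concrete way to shift testing left: the team agrees on expected behavior and writes the test before the production code exists. Here is a minimal sketch, where `apply_discount` is a hypothetical feature invented purely for illustration:

```python
def apply_discount(price: float, percent: float) -> float:
    """Minimal implementation, written only after the test below was agreed on."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # These expectations were written first, during refinement,
    # before any production code existed.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 0) == 80.0
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError for an invalid percentage"
    except ValueError:
        pass


test_apply_discount()
```

The point isn’t the arithmetic; it’s that the conversation about edge cases (what happens at 0%? at 150%?) happens before a developer starts to code.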
While shift-left testing is becoming more common, shift-right testing is just as important. Shift-right means testing in production: gathering information about the quality of the product once it’s live. It doesn’t necessarily mean executing tests in production, though it can include that. You can also gather information by reviewing logs, traces, monitoring tools, and so on.
To review this, ask yourself this question: “How much production information is available to improve the process?” For example, review bugs encountered by the users, logs, monitoring, and analysis of how often users engage with the different parts of the system.
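As a small illustration of mining production information, the sketch below counts usage and errors per endpoint from log lines. The log format and endpoints are made up; real teams would pull this from their logging or monitoring stack:

```python
from collections import Counter

# Sample production-style log lines: timestamp, level, endpoint, status.
log_lines = [
    "2024-05-01 10:00:01 INFO  /checkout 200",
    "2024-05-01 10:00:03 ERROR /checkout 500",
    "2024-05-01 10:00:05 INFO  /search 200",
    "2024-05-01 10:00:07 INFO  /checkout 200",
    "2024-05-01 10:00:09 ERROR /profile 500",
]

usage = Counter()
errors = Counter()
for line in log_lines:
    _, _, level, endpoint, _ = line.split()
    usage[endpoint] += 1
    if level == "ERROR":
        errors[endpoint] += 1

# Feed the heaviest-used, most error-prone areas back into the test strategy.
for endpoint, hits in usage.most_common():
    print(f"{endpoint}: {hits} hits, {errors[endpoint]} errors")
```

The output tells you where users actually spend their time and where failures cluster, which is exactly the information a strategy review needs.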
Don’t distinguish between “manual testing” and “automated testing.” Instead, define a software testing strategy that has a more holistic vision of testing, thinking about what to execute “manually” and how to rely on automation in the different layers. It’s also important to review how to best apply testing heuristics. A fresh pair of eyes always helps to discover bias or aspects that we assume are acceptable.
Review software quality attributes such as performance efficiency, maintainability, compatibility, usability, accessibility, and how they’re being tested. Prioritize which attributes are most important to end-users and stakeholders.
Environments and Platforms
Review the use of test environments and analyze whether everyone knows the objective of each one and how to improve them. It’s also important to analyze the platforms from which users access the product, using Google Analytics or by reviewing logs, and ensure that testing covers conditions as close to those platforms as possible. You may use online platforms such as Sauce Labs or BrowserStack to achieve this.
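One way to turn that analytics review into a concrete test matrix is to rank platforms by traffic share and test the smallest set that covers most real users. The traffic shares below are invented numbers standing in for your analytics or log data:

```python
# Hypothetical browser/OS traffic shares pulled from analytics.
platform_share = {
    ("Chrome", "Windows 11"): 0.41,
    ("Safari", "iOS 17"): 0.27,
    ("Chrome", "Android 14"): 0.18,
    ("Firefox", "Ubuntu 22.04"): 0.09,
    ("Edge", "Windows 10"): 0.05,
}

COVERAGE_TARGET = 0.90  # aim to cover at least 90% of real user traffic

covered, matrix = 0.0, []
for platform, share in sorted(platform_share.items(), key=lambda kv: -kv[1]):
    matrix.append(platform)
    covered += share
    if covered >= COVERAGE_TARGET:
        break

print(f"Test matrix ({covered:.0%} of traffic): {matrix}")
```

Here, four of the five platforms already cover about 95% of traffic, so the long tail can be handled with lighter, riskier-but-cheaper checks.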
Reviewing version control practices provides visibility into code management, branching strategy, and more. This is useful for aligning on the risks that exist in this area.
Some things to consider:
- Are we doing pair reviews or pair programming? Would that be useful for the team?
- Are static code verification tools used, such as SonarQube? Review the strategy for managing technical debt.
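To make “static code verification” concrete, here is a toy check in the spirit of tools like SonarQube: it flags public functions with no docstring, without ever running the code. Real projects should lean on mature tools; this only illustrates the idea:

```python
import ast

# Source under analysis; the pay/refund functions are invented examples.
source = '''
def pay(amount):
    return amount * 1.21

def refund(amount):
    """Refund the given amount."""
    return -amount
'''

tree = ast.parse(source)
missing = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
    and not node.name.startswith("_")
    and ast.get_docstring(node) is None
]

print("Functions missing docstrings:", missing)
```

Checks like this run in seconds on every commit, which is why they belong early in the pipeline rather than in a periodic manual review.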
Continuous Integration/Continuous Delivery (CI/CD)
Giving visibility to the strategy regarding CI/CD is essential. Pipelines need maintenance like any other software. Over time, technology gets outdated or inefficient, so we need to review the whole process, not just patch specific parts.
Reviewing the CI/CD strategy also helps ensure that everyone is aligned and understands what it’s for, so everyone can contribute to improving the process. Adding automatic validations to the different stages of the pipeline is crucial for an efficient and reliable delivery process.
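The “automatic validations at each stage” idea can be sketched as a sequence of gates where the pipeline stops at the first failure. The stage names and checks below are made up; real pipelines live in your CI system (Jenkins, GitHub Actions, GitLab CI, and so on):

```python
def run_pipeline(stages):
    """Run each (name, check) pair in order; stop at the first failed validation."""
    completed = []
    for name, check in stages:
        if not check():
            print(f"Pipeline stopped: validation failed at '{name}'")
            return completed
        completed.append(name)
    print("Pipeline succeeded:", completed)
    return completed


stages = [
    ("lint", lambda: True),                    # e.g. static analysis passed
    ("unit-tests", lambda: True),              # e.g. test suite is green
    ("coverage-gate", lambda: 0.83 >= 0.80),   # e.g. coverage over threshold
    ("deploy-staging", lambda: True),
]

run_pipeline(stages)
```

Fail-fast gates like these are what make a pipeline trustworthy: a red stage blocks everything downstream, so nobody has to remember to check manually.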
A part of a good testing strategy is to use the right metrics and KPIs to your advantage. Testing looks to provide more information to have less uncertainty and better control over risks, but you must analyze that information carefully.
As Simon Sinek famously said, “Start with why.” Why do we want to measure something? Pick metrics tied to things that may improve business performance. Vanity metrics, like the number of bugs reported, shouldn’t be the main focus, because they may cause your team to spend time on less critical areas.
Some testing metrics to consider include those relating to user satisfaction, the development process, cycle and lead time, test coverage, code quality, the severity of incidents, exploratory testing, test automation reliability, performance metrics including response times and resource usage, and team happiness.
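Two of those metrics, lead time and escaped defects, are simple to compute once you track the right dates and bug origins. The sample data below is invented for illustration:

```python
from datetime import date

# Lead time: days from when a requirement is accepted to when it ships.
work_items = [
    (date(2024, 3, 1), date(2024, 3, 8)),
    (date(2024, 3, 4), date(2024, 3, 18)),
    (date(2024, 3, 10), date(2024, 3, 15)),
]
lead_times = [(done - started).days for started, done in work_items]
avg_lead_time = sum(lead_times) / len(lead_times)

# Defect escape rate: share of bugs that were found in production.
bugs_in_production, bugs_total = 4, 25
escape_rate = bugs_in_production / bugs_total

print(f"Average lead time: {avg_lead_time:.1f} days")
print(f"Defect escape rate: {escape_rate:.0%}")
```

Both numbers connect directly to business outcomes (how fast value ships, how often customers hit bugs), which is what separates them from vanity counts like total bugs reported.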
What is the difference between a test strategy and a test plan?
Test strategy: A test strategy is a set of guidelines that explains test design and establishes how testing must be done.
Test plan: A test plan is a document that defines the scope, objective, approach, and value of a software testing effort.
What is the purpose of the software testing strategy?
The purpose of a software testing strategy is to translate high-level organizational objectives into concrete test activities that meet those objectives from a software quality perspective.
What are the important testing strategies in software engineering?
Here are important strategies in software engineering:
Unit Testing: The most basic approach, where the programmer tests a single component of the program in isolation. It helps developers know whether an individual piece of code is working properly.
Integration testing: Focuses on the interfaces between the modules of the application. You must check whether the integrated units work together without errors.
System testing: Compiles and tests the software as a whole. This testing strategy checks functionality, security, and portability, among other attributes.
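The unit vs. integration distinction can be shown in a few lines. Both components here, a price formatter and a cart that uses it, are hypothetical examples:

```python
def format_price(cents: int) -> str:
    """Unit under test: a single, isolated component."""
    return f"${cents / 100:.2f}"


class Cart:
    def __init__(self):
        self.items = []

    def add(self, cents: int):
        self.items.append(cents)

    def total_label(self) -> str:
        # Integration point: Cart depends on format_price.
        return format_price(sum(self.items))


# Unit test: one component, in isolation.
assert format_price(1999) == "$19.99"

# Integration test: components working together across their interface.
cart = Cart()
cart.add(500)
cart.add(1250)
assert cart.total_label() == "$17.50"
```

A system test would go one level further and exercise the deployed application end to end, through its real UI or API.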
What are the factors involved in the selection of test strategy?
The testing strategy selection may depend on these factors:
- Is the test strategy a short-term or long-term one?
- Organization type and size.
- Project requirements.
- Product development model.
So far, this model has touched upon reviewing what we know we must review, but it’s just as important to contemplate “what are the unknown unknowns?” This is when critical thinking is essential. 💡
Do you have a habit of reviewing these aspects periodically? Any ideas on other aspects you think should be added to the model? Leave a comment!