Software testing is not just about finding bugs: it is about investigating, analyzing, and ensuring the quality of your delivery in every possible way (quality of the code developed, of the agreed requirements, of the adopted patterns, and so on).
With the growing demand for ever-faster delivery, it is fair to say that there is no point in shipping something at odds with what was agreed with the stakeholders. Knowing and adopting software testing techniques therefore makes all the difference in validating the delivered product.
In this article, we will look at best practices for some white box and black box testing techniques. In particular, we will discuss:
- After All, What is the Black Box Test?
- What is White Box Testing?
## After All, What is the Black Box Test?

It gets this name because the source code is ignored during the test. Using this technique, the tester is not concerned with the internal elements of the software, but with how it behaves.
In this sense, this type of testing is also known as functional testing, as it seeks to ensure that the product meets its functional requirements. It therefore aims to validate the system's inputs and outputs.
For that reason, it is commonly performed from the user's perspective, i.e., through the product's interface.
To increase quality and shield the software from failure, ideally every possible input and output would be tested. In most cases, however, we know that this is humanly impossible.
Besides, unclear requirements may (and will) affect which inputs and outputs the test considers acceptable.
This means that, on top of the sheer volume of data to validate, some valid data may never be exercised by the tests. For example, imagine you only use numbers to test an id field, but the developer implemented the field as a string (alphanumeric characters). Notice how your test data disregards a large set of inputs that may (but perhaps should not) be accepted? 🤔
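To make the id-field example concrete, here is a minimal sketch. The validator, its name, and its rules are all hypothetical, invented for illustration: the point is that numeric-only test data never touches a whole class of inputs the code accepts.

```python
# Hypothetical validator: the developer treats "id" as an alphanumeric string.
def is_valid_id(value: str) -> bool:
    """Accepts ids made of letters and digits, 1 to 8 characters long."""
    return value.isalnum() and 1 <= len(value) <= 8

# A tester who assumes ids are numeric only covers part of the input space:
numeric_only_cases = ["1", "42", "99999999"]

# These equally valid inputs would never be exercised by that test data:
missed_cases = ["abc123", "A1B2"]

assert all(is_valid_id(v) for v in numeric_only_cases)
assert all(is_valid_id(v) for v in missed_cases)  # accepted, but untested
```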
Keeping this in mind, some practices aim to amplify the effectiveness of this technique. We will look at three of them: equivalence partitioning, boundary value analysis, and the decision table.
### Equivalence Partitioning

Imagine you have a field that only accepts even numbers. Is it necessary to verify that the field rejects every single odd number?
According to equivalence partitioning, no. This technique states that when different inputs produce the same result, it is enough to sort them into sets (partitions) and test only one value from each.
In our scenario, the numbers 5 and 11 produce the same output (both are odd), so testing with just one of them is enough. Instead of 2 tests with equivalent results, we have 1 covering the same output.
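The even/odd scenario above can be sketched like this. The `is_even` check is a hypothetical stand-in for the field's validation; the key idea is one representative input per partition.

```python
# Hypothetical validation for a field that only accepts even numbers.
def is_even(n: int) -> bool:
    return n % 2 == 0

# Two equivalence classes: even numbers and odd numbers.
# 5 and 11 belong to the same class, so one representative is enough.
even_representative = 4
odd_representative = 5

assert is_even(even_representative) is True   # "accepted" partition
assert is_even(odd_representative) is False   # "rejected" partition
```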
### Boundary Value Analysis

This technique suggests concentrating the tests on values that sit at the edges of the allowable range.
Thus, if you want to validate, for example, that a given operation requires the user to be at least eighteen years old, the best values for the test are 17, 18, and 19, as they sit around the minimum allowed value (18).
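A minimal sketch of the age example, assuming the rule is "18 or older" (the function and constant names are invented for illustration):

```python
MINIMUM_AGE = 18  # assumed business rule: user must be 18 or older

def is_allowed(age: int) -> bool:
    return age >= MINIMUM_AGE

# Boundary value analysis: test just below, at, and just above the limit.
assert is_allowed(17) is False  # below the boundary
assert is_allowed(18) is True   # exactly at the boundary
assert is_allowed(19) is True   # just above the boundary
```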
### Decision Table

Suppose you are testing a feature that involves several conditions. How do you know whether every combination produces the expected output?
For instance, imagine testing a warehouse withdrawal:
| | Case 1 | Case 2 | Case 3 |
|---|---|---|---|
| Employee has permission? | No | Yes | Yes |
| Product has available balance? | X | X | No |
| Expected result | Employee is not allowed | Invalid product | Product without balance |
This is where the decision table comes in: it verifies the expected result for each set formed by combining these parameters. For our example, we can be sure that at least 3 of the possible combinations are covered by 3 tests.
## What is White Box Testing?

It gets this name because the tester has access to the internal structure of the application. Its focus is therefore to ensure that the software components are consistent.
In this sense, this type of test is also known as structural or glass box testing, as it seeks to guarantee the quality of the system's implementation. It therefore aims to validate the internal logic of the product.
Accordingly, it is commonly performed against the source code. It thus requires more technical knowledge from the tester, not to mention a higher cost: since the tests are implementation-based, whenever the implementation changes, the tests must change as well.
Keeping this in mind, some practices aim to amplify the effectiveness of this technique. We will look at two of them: condition testing and cycle testing.
### Condition Testing

This technique is simple: its purpose is to evaluate whether the logical operators and boolean variables (true/false) behave consistently.
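A minimal sketch of condition testing, using a hypothetical `can_ship` function: each boolean operand of the compound condition is exercised as both true and false.

```python
# Hypothetical condition under test: a compound boolean expression.
def can_ship(in_stock: bool, paid: bool) -> bool:
    return in_stock and paid

# Condition testing: drive each operand through both truth values.
assert can_ship(True, True) is True
assert can_ship(True, False) is False
assert can_ship(False, True) is False
assert can_ship(False, False) is False
```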
### Cycle Testing

Who has never needed to test a repetition structure (for/while)? That is precisely what this technique does: it validates loops.
To do so, it classifies cycles into 4 types: unstructured, simple, nested, and concatenated.
- An unstructured cycle is a set of repetition blocks used in a disorderly manner. When identified, it should be restructured, as it considerably increases the cost of testing and maintaining the system.
- A simple cycle, as the name implies, is a single repetition structure under test.
- Nested cycles are cycles within cycles.
- And last but not least, concatenated cycles are dependent repetition structures: to test block 2, I first need to ensure that block 1 is coherent.
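A small sketch of cycle testing on a nested loop. The function is hypothetical; the interesting part is the set of test values, which exercises the boundary iteration counts of the cycle (zero passes, one pass, a typical run).

```python
def sum_upper_triangle(matrix):
    """Sums the elements on or above the main diagonal (nested cycles)."""
    total = 0
    for i, row in enumerate(matrix):       # outer cycle
        for j, value in enumerate(row):    # inner (nested) cycle
            if j >= i:
                total += value
    return total

# Cycle testing exercises the boundary iterations of each loop:
assert sum_upper_triangle([]) == 0                         # zero passes
assert sum_upper_triangle([[5]]) == 5                      # exactly one pass
assert sum_upper_triangle([[1, 2], [3, 4]]) == 1 + 2 + 4   # typical nested case
```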
Whether it is black box (functional) testing or white box (glass box or structural) testing, what matters is ensuring that development reaches the highest possible quality. Hopefully these techniques, if you didn't know or apply them before, have piqued your interest in testing and software quality.
Happy Testing! 😉