Are you testing too much, or are you just writing too much code? 🤔
Over the past few years of coding, I’ve fluctuated between not testing at all and testing too much. In part, it’s because the principles behind testing aren’t exactly something that’s taught in a formal setting. Nor is there any real consensus on how it’s done.
Everyone has their own opinions when it comes to testing. Testing methodologies can also vary, depending on the scope and final purpose.
When it comes to testing your code, too many tests can slow down your progress and gridlock you into test units. Not enough testing can lead to flaky code.
So what’s the right amount? And how do you go about implementing it? Rather than going into the usual talk about test-driven development and all that stuff, let’s take a different approach to testing your code.
Finding the Balance in Testing Your Code
- Start with the end in mind
- Your application in a nutshell
- Are you testing too much?
- How do you know if you’re writing too much code?
- Final Thoughts
A lot of developers tend to just start coding. When it comes to following tutorials or coding up your own projects, many of us use a play-it-by-ear kind of approach.
While there is nothing wrong with this, for larger and more complex applications and systems, it can be a design process flaw. The thing with writing test units is that you have to understand how things are going to end up.
The act of writing the code becomes a process of coloring in the pieces with logical indicators, data processing, and information transformation. This is why you need to know what your end goal is. Without it, your test’s value runs on a system of diminishing returns.
So how do you determine what your end goal looks like? What happens when you’re working in an agile environment where things are constantly changing?
You start with the big picture.
This is your application in a nutshell:
At its most basic, there are three layers and therefore three potential spaces to test: the data, the backend, and the front end.
The data layer is the foundation of everything. Why? Because everything is data. Your backend processes the data and your front end displays it. Without a good architectural structure in the way your data is logically stored, your backend is going to have a hard time keeping up with demands from the business.
That’s why when it comes to testing, you need to stress test your data structures first before you start coding.
The issue many organizations face is that they start creating the backend right away, based on the data infrastructure they’ve already got, without first assessing whether it is fit for purpose. This can create long-term debt and sunk costs for the business, because data refactoring comes with its own set of risks.
When data is ultimately everything, you don’t want to keep building on over-normalized tables that made sense when they were originally created a decade or two ago.
First, you need to decide if a new data ‘shape’ is required. By shape, we’re talking about if a new set of data is required to produce a particular result. For example, rather than calling fifteen different tables to generate a particular data set, is it possible to compile and call only one?
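As a sketch of what a compiled ‘shape’ might look like (the tables and column names here are hypothetical, and a real system would have far more sources), here is a denormalized summary table built from two normalized ones, so the backend calls one table instead of joining several:

```python
import sqlite3

# Hypothetical schema: two normalized tables compiled into one
# flat, read-optimized table, so the backend queries a single source.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 30.0), (12, 2, 12.5);
""")

# The compiled 'shape': one row per customer with a precomputed total,
# disposable and rebuildable from the master tables at any time.
conn.execute("""
    CREATE TABLE customer_summary AS
    SELECT c.id, c.name, COALESCE(SUM(o.total), 0) AS lifetime_total
    FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id, c.name
""")

rows = conn.execute(
    "SELECT name, lifetime_total FROM customer_summary ORDER BY id"
).fetchall()
print(rows)  # [('Ada', 55.0), ('Grace', 12.5)]
```

The compiled table holds no information the source tables don’t already have, which is exactly what makes it safe to throw away and regenerate.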
The reason tableless databases are so popular is that little processing is required by the backend when the data is called. Often, intermediary cloud functions create and sync duplicate data sets, because data storage is cheap. Keeping that data in sync becomes a matter of coordination.
If a new data ‘shape’ is required, it might pay to create new compiled data sets that are disposable. This will help decouple your application from the master data sets and reduce side effects on existing features.
The perk of using cloud functions to orchestrate the creation of your compiled data is that it puts a watch on your data and only updates it as needed. So if one of your tables changes, you only need to change the cloud function for that particular portion of your data rather than the entire backend coded around it.
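A minimal sketch of that watch-and-sync idea, with everything in memory and all names hypothetical; in a real deployment, something like `on_table_change` would be wired to a database trigger or cloud event rather than called directly:

```python
# Hypothetical source data and the compiled slice derived from it.
source_tables = {
    "orders": [{"customer_id": 1, "total": 25.0},
               {"customer_id": 1, "total": 30.0}],
}
compiled = {}

def rebuild_slice(table_name):
    """Recompute only the compiled data derived from one source table."""
    if table_name == "orders":
        totals = {}
        for row in source_tables["orders"]:
            cid = row["customer_id"]
            totals[cid] = totals.get(cid, 0) + row["total"]
        compiled["order_totals"] = totals

def on_table_change(table_name, new_row):
    """The 'cloud function': fires per change and syncs the duplicate."""
    source_tables[table_name].append(new_row)
    rebuild_slice(table_name)  # only this slice, not the whole backend

rebuild_slice("orders")
on_table_change("orders", {"customer_id": 2, "total": 12.5})
print(compiled["order_totals"])  # {1: 55.0, 2: 12.5}
```

The point of the sketch is the scope of the rebuild: a change in one table triggers one small function, not a pass over the entire data layer.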
This methodology works particularly well on preexisting data architectures that are not necessarily structured in a way that’s easy to deal with. Rather than working directly with the data, you’re creating an additional data layer that is connected but separated at the same time.
When you are working directly with more than a dozen tables (which can happen in legacy systems), it creates a complexity issue for your backend. ‘Flattening’ the data processing by reducing the number of tables you have to physically deal with reduces the amount of code your backend needs.
A lot of traditional testing happens in the backend. A lot of developers also tend to get stuck into unit testing, which can be both a curse and a blessing, depending on your situation and application needs.
The point of testing is to catch defects. What’s a defect? A defect is when something does not process in a way that produces the expected outcome.
This can happen at any level in your code. However, we’re more likely to catch it in the backend because it is the space where testing is talked about the most.
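For example, a defect and the unit test that catches it might look like this (the discount helper is hypothetical):

```python
def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# The test encodes the expected outcome. If someone later 'fixes' the
# formula to `price - percent`, these assertions flag the defect.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(200.0, 50) == 100.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

The test itself is trivial; its value is that “expected outcome” is now written down instead of living in someone’s head.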
The granularity of testing increases the closer to the code we are. When we move towards the business logic and requirements, we become more concerned with the shape of the result rather than the actual result.
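That difference might be sketched like this, with a hypothetical backend function tested at both levels:

```python
def build_report(orders):
    """Hypothetical backend function under test."""
    total = sum(o["total"] for o in orders)
    return {"count": len(orders), "total": total}

report = build_report([{"total": 25.0}, {"total": 30.0}])

# Broad, business-level test: the *shape* of the result is right.
assert set(report) == {"count", "total"}
assert isinstance(report["total"], float)

# Granular, code-level test: the *actual* values are right.
assert report == {"count": 2, "total": 55.0}
```

The broad assertions survive most internal refactors; the granular one breaks the moment the numbers change, which is sometimes exactly what you want and sometimes just friction.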
So how granular should your tests be?
Well, it depends.
The more complex and component independent your code is, the more granular you need to go.
Modularity, cohesion, and coupling determine how tightly intertwined the different components of your application are. When one piece of code depends heavily on another in order not to break, you’re going to need a higher and more detailed level of testing.
If your code is robust because of the simplicity of the data passing through it (that is, you’re not processing a dozen or so tables), you’re going to require less testing in the long run.
The amount of testing required in your backend is a flow-on effect from the complexity of your data source. The quickest way to reduce the amount of test code required and overall code, in general, is to reduce the complexity of your data structures — or at the very least, reduce it through compiled data.
The front end is a funny space for testing. You can test the data, but visually testing whether something is in the right place on a particular device can be hard to orchestrate.
Yes, you can manually click around to make sure everything is working as it should — but the potential use case is much greater in terms of browsers and devices when compared to the backend and the data layer.
That’s because the backend and the data layer are a linear process. There are no tangents in the way things are called, processed, and compiled. Everything, technically, is within your control.
But when it comes to the front end, you’re presented with a myriad of potential failure points — both in code logic and the presentation layer.
This is where the complication begins: manual testing is required to ensure that things are displayed correctly. Why? Because we still need human judgment to decide whether that float to the left is behaving as it should on both Chrome and Microsoft’s Edge browser.
The process itself can be tedious unless it is automated in a particular way, using open source and free automation testing tools such as TestProject.
When it comes to testing, we can get so caught up in the process of writing test cases that we forget the purpose of their existence. When you always have the end goal in mind, it’s easier to write broad test cases and architect your code in a way that flows around them.
The benefit of starting with the big picture is that it gives you the flexibility to make necessary changes without being tied to a particular unit test you created before you fully understood and discovered the hidden requirements of your application.
The traditional and generally suggested approach is to start with the unit tests. However, more experienced developers have found this process to be highly flawed.
Why? Not every application or project you work on will be a greenfield project, which means there will always be hidden constraints that don’t make an appearance until you start getting your hands dirty with the code.
This is why starting with broad tests and implementing granular test units later down the track works better for your overall workflow. The granular tests are more insurance policies for your current code than actual reasons to code in a particular way.
Writing too much code is an issue that we all face in our careers as code creators. Sometimes, it’s due to a lack of knowledge about a particular technique, feature, design pattern, or idea.
Sometimes, it’s due to the complexity of the current code we’re working with. Sometimes, it’s due to the data we’re given and the processing required to extract what we actually need to create the outputs required.
When it comes to code, it starts with data and ends with data. To reduce the amount of code required, and therefore the associated tests that come with the code creation process, reduce the complexity of the data you’re working with.
Flatten the structures, data sets, and processing required, and do whatever else you can to reduce the amount of code condensed into a particular space.
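One small illustration of flattening, using a hypothetical nested record collapsed into a single layer of dotted keys so downstream code doesn’t have to navigate the hierarchy:

```python
nested = {
    "customer": {
        "id": 1,
        "profile": {"name": "Ada", "region": {"code": "EU"}},
    },
}

def flatten(d, prefix=""):
    """Collapse nested dicts into one flat layer of dotted keys."""
    flat = {}
    for key, value in d.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, f"{full_key}."))
        else:
            flat[full_key] = value
    return flat

flat = flatten(nested)
print(flat)
# {'customer.id': 1, 'customer.profile.name': 'Ada',
#  'customer.profile.region.code': 'EU'}
```

No information is lost; the same data simply needs less traversal code, and therefore fewer tests, wherever it is consumed.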
While it may seem like you’re just distributing your code over the different layers, you mitigate the risk of having it all in one place.
Smaller points of failure are easier to deal with than major breakdowns. They are also easier to detect and fix, and they reduce the number of test units required within a particular layer. The perk of spreading the risk across the different parts and layers of the application is that it helps you avoid the trap of writing tightly coupled unit tests that are brittle to change.
Change is inevitable when it comes to coding. When the business changes, evolves, or comes up with a new feature or idea, the code created is either going to add to the current structure or change it.
When change happens, test cases often no longer apply, making their existence redundant.
When change happens on a regular basis, writing test cases that are too granular can result in wasted effort. The best thing you can do to prevent code waste is to simply have balance — not just in your code but in the data you consume and produce.
Happy Testing! 😎