
5 Sensible Defaults to Enhance Quality Delivery


In this blog post, I am going to present a collection of sensible defaults that have been observed to have a positive impact on quality delivery 📈. In case you are wondering why I’m calling them sensible defaults and not best practices, it’s because I honestly feel best practices are overrated and very context-specific.

What might be a best practice for one project may not necessarily be one for another; it totally depends on the needs of that project. When you set up an OS or install an app, you get a set of default settings to start with, which you can further modify based on your needs.

Sensible defaults work in a similar fashion. These are practices that, when followed, have been found to yield fruitful results almost always, and you have the flexibility to customize further for even better results.

Before you read further, I want you to take a minute and think about the following situations:

  • A lot of defects are caught after a user story moves to testing
  • Developers saying that a particular requirement was never supposed to be part of a story, so they didn’t develop it
  • Testers validating and logging defects for something that was never even part of the story
  • BAs assuming the feature doesn’t require any cross-team collaboration and doesn’t have dependencies on other teams
  • The client/PO saying the developed feature was not as per their expectations
  • Story development blocked mid-way due to a dependency on another team/vendor

If you closely observe, you will find that almost all of these situations could have been avoided if there was close collaboration and proper communication between the Devs, QAs, BAs, and POs right from the beginning.

The sensible defaults that I’m going to talk about next are ways to establish common ground between the various amigos, keeping everyone in the team on the same page, and ensuring that what’s actually being delivered is what’s expected.

Additionally, they provide an efficient way to convey early feedback in a timely manner (shift left much, eh? 😜). Now, let’s take a look at the sensible defaults for a successful delivery.


1. Kick-off


This is where the Dev, QA, and BA meet to discuss and align on expectations/outcomes from the story. This should be conducted after a story becomes ready for development (i.e., when the analysis is completed and there are no blockers), and before a story moves to development.

The goal is to establish a common understanding between the amigos, identify risks, and validate any assumptions 🎯. A kick-off often takes the form of a checklist (as shown below); you can add or remove items as per the needs of your project:

  • Is everyone clear on the value of the story?
  • Are there any assumptions or callouts that need to be communicated to stakeholders?
  • Are there any dependencies on another team/vendor?
  • Is the story dependent on any other story which is not ready for development yet?
  • Are there any test data requirements?
  • Are the ACs (Acceptance criteria) well defined by the BA and reviewed by the QA and Dev?
  • Do we have enough technical information to play the story?
  • Does the story estimate look fine, or is a re-estimation required?
  • Are there any wireframes or mockups that the client has shared? (For UI-based story)
  • Are the event payloads well-defined? (For a microservice-based story)
  • Are the source data and its schema rightly identified? (For data ingestion story)
  • Is the data to be ingested classified as PII or non-PII? (For data ingestion story)
  • Are there any CFRs (cross-functional requirements) like performance, security, accessibility, etc. associated with the story?
  • Is there any test documentation needed for knowledge management?
  • Do the QA and Developers have all required access to the systems and environments?

Kick-offs can lead to some very early feedback, for example: the BA needs to add more ACs, the story needs more grooming, the client needs to be informed about certain data requirements, the client needs to provide certain accesses for some folks, etc.

2. Deskcheck


Deskchecks are called by the developer(s) for a quick walkthrough of the story they have worked on, and to demonstrate that it conforms to the ACs. They involve the same folks who were involved in the kick-off.

This typically happens after the developers have finished working on the story but before the code is deployed to the test environment, i.e., it is done on the developer’s local system or in the dev environment. Like the kick-off, this also takes the form of a checklist 📝 (shown below):

  • Does the story meet all the ACs (Acceptance criteria)?
  • Have all callouts/assumptions/risks been acknowledged by the stakeholders?
  • Have unit tests and integration tests been written?
  • Does the story have enough test coverage?
  • Does this story have an impact on any other areas (Upstream-downstream services)?
  • Is it consistent amongst all major browsers?
  • Is one round of sanity testing done by the Devs?
  • Is the code readable and follows set standards?
  • Has the code been peer-reviewed?
  • Are all configurations and setup in place to begin exhaustive testing?

The goal of a deskcheck is to inspect the solution quickly and provide early feedback in the story life cycle, shortening the feedback loop ➰. I have seen a lot of issues get identified and fixed during deskchecks, through active discussions between the amigos.

3. Deployment Plan

Ideally, all deployment steps should be part of your CI/CD pipeline; however, there are often instances where you need some manual intervention before and after deploying the code to an environment. This is most common with migrations and upgrades.

The manual activities needed before deployment can include (but are not limited to) – taking a backup of the data to be migrated, turning off some scheduled data pipelines during deployment, noting the count of records in the table before deployment (to verify the count later), etc. These can be referred to as pre-deployment steps.

Once the pre-deployment steps are completed, the CI/CD pipeline is triggered, and the code is deployed to the intended environment. After a successful deployment of the code, there could be certain validation activities like validating the count of records, running a sanity check to ensure everything looks fine, etc.

In a data platform project, you would typically need to run the new ETL pipelines once the deployment is complete to generate new tables. Due to security concerns, there could also be a need to delete the previously backed up data after a few days of deployment. These can be referred to as post-deployment steps.

Often, all these steps are carried out by the developer while deploying to lower environments, without being documented anywhere. This makes it incredibly difficult during the production release to figure out what the exact deployment steps were, especially if that specific developer isn’t available that day 📅

Hence, it is of utmost importance to have a well-documented deployment plan with all the pre and post-deployment steps for each story. This serves as a handy “Path to production” and should be executed by QAs during UAT deployment.

A sample deployment plan for a data-based story that regenerates the users table from a different input source is given below for reference:

  • Back up the existing users table data in an S3 bucket and note the backup location (this will help with rollback if required)
  • Take the total count of records in the users table
    • SELECT COUNT(*) FROM "users";
  • Deploy the data-pipeline code (run the CI/CD pipeline)
  • Run the DAG (ETL pipeline) to generate the users table
  • Validate the users table data and the total record count against step 2 (the counts should match)
    • SELECT COUNT(*) FROM "users";
  • Monitor the scheduled DAG (ETL pipeline) for T+2 days
  • Delete the backed-up users table data (after T+2 days) from the location noted in step 1

*T=date of deployment
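
If your team prefers scripting such checks rather than running them by hand, the plan above can also be captured in a small script. Below is a minimal sketch, assuming the AWS and Airflow 2.x CLIs are available on the deployment host; the bucket paths, the DAG name (regenerate_users_table), and the run_count_query helper are hypothetical placeholders, not real project details:

import subprocess
from datetime import date

BACKUP_LOCATION = f"s3://backup-bucket/users/{date.today():%Y-%m-%d}/"  # hypothetical backup bucket/path
SOURCE_LOCATION = "s3://data-bucket/users/"                             # hypothetical live data location
DAG_ID = "regenerate_users_table"                                       # hypothetical DAG name


def run_count_query() -> int:
    # Placeholder: replace with a real call to your warehouse driver,
    # e.g. execute 'SELECT COUNT(*) FROM "users";' and return the scalar result.
    return 0


def pre_deployment() -> int:
    # Step 1: back up the current users data and note the backup location
    subprocess.run(["aws", "s3", "sync", SOURCE_LOCATION, BACKUP_LOCATION], check=True)
    # Step 2: capture the row count so it can be compared after the re-run
    return run_count_query()


def post_deployment(count_before: int) -> None:
    # Step 4: trigger the DAG that regenerates the users table
    subprocess.run(["airflow", "dags", "trigger", DAG_ID], check=True)
    # Step 5: validate the regenerated table against the pre-deployment count
    count_after = run_count_query()
    if count_after != count_before:
        raise RuntimeError(f"Row count mismatch: {count_before} before vs {count_after} after")


if __name__ == "__main__":
    before = pre_deployment()
    # Step 3 (deploying the data-pipeline code) is done by the CI/CD pipeline in between
    post_deployment(before)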

Note: Do make sure that the deployment steps for all the stories that are part of the release are collated together in the release plan.

4. Rollback Strategy

Let’s say you have documented your deployment plan very well and D-day has arrived: it’s time to release the code to production. You proceed with the deployment steps, and after a few minutes you observe that your deployment pipeline is failing in production (you really can’t predict anything with software, can you? 🙈).

Wouldn’t it be a relief to have a strategy in place to revert the latest deployment whenever something goes wrong in production during or after deployment? You can carry out the rollback, restore the systems, and continue to investigate the root cause of the issue.

This minimizes the chances of any client-facing impact and reduces downtime. The goal of a rollback is always to return the systems to a known stable state. Hence, you should always have a documented rollback strategy for every story that is part of the release, and the QAs must test out the rollback process at least once in any of the lower environments.

A sample rollback strategy for a data-based story is given below for reference:

  • Revert all the changes for the user story from the release branch and run the CI/CD pipeline again
  • Once the code is reverted, restore the backup of the users table from the S3 location and re-run the DAG (ETL pipeline) for the previous day (T-1)

*T=date of deployment
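
Continuing the scripted approach, the rollback can be sketched in the same spirit. This is a minimal sketch with the same hypothetical bucket paths and DAG name as the deployment sketch, and it assumes the AWS and Airflow CLIs are available on the host; the code revert itself still happens in your VCS/CI tooling:

import subprocess

BACKUP_LOCATION = "s3://backup-bucket/users/<backup-date>/"  # location noted in step 1 of the deployment plan (placeholder)
SOURCE_LOCATION = "s3://data-bucket/users/"                  # hypothetical live data location
DAG_ID = "regenerate_users_table"                            # hypothetical DAG name


def rollback() -> None:
    # The code revert (reverting the story's commits on the release branch and
    # re-running the CI/CD pipeline) is done in your VCS/CI tooling, not here.
    # Restore the users data from the backup taken before deployment
    subprocess.run(["aws", "s3", "sync", BACKUP_LOCATION, SOURCE_LOCATION, "--delete"], check=True)
    # Re-run the DAG so the users table is rebuilt from the restored data
    # (depending on the DAG's schedule, a backfill for T-1 may be more appropriate)
    subprocess.run(["airflow", "dags", "trigger", DAG_ID], check=True)


if __name__ == "__main__":
    rollback()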

5. Showcases

This agile ceremony is used to demonstrate the working software to the stakeholders. A couple of folks from the team can sign up for the showcase, depending on the number of stories to be showcased.

A showcase should be done at the end of each sprint or whenever a demonstrable chunk of stories is completed, with the goal of soliciting timely feedback from the stakeholders. It keeps them informed about the progress of a particular feature and instills confidence.

It also serves as the perfect platform to engage them; they can play around with the working software and confirm if their experience is in line with what was promised.

Someone from the team should be identified in advance to take notes during the showcase. After a showcase, it is important to carve out stories for future iterations from the feedback received. The showcase notes come in handy while creating the new stories.

Conclusion

We have seen how each of these sensible defaults leads to fast feedback, shared accountability for quality (between the amigos), and reduced cost of delivery by uncovering defects earlier, providing the best possible experience for the clients.

They also help in managing risks better through proper communication, and they enable the team with the means to recover from issues faster. I would like to wrap up this blog with a callout: there may well be many other sensible defaults that can improve delivery quality, but I have found the ones above to be the most effective, based on my experience ✅

About the author

Soumyabrata Moitra

Soumyabrata is a passionate tester with a love for tool-assisted testing. He has 8 years of experience in the IT industry, with exposure to various domains like Insurance, Healthcare, E-Commerce, and Fitness. He holds a bachelor’s degree in Computer Science and is currently working as a Senior Consultant (QA) at Thoughtworks.

LinkedIn – https://www.linkedin.com/in/soumya27/

Talk on Mutation Testing – https://www.youtube.com/watch?v=UaiI0uHR0s4
