Writing software tests is one of the most important activities in the software development lifecycle. Preventing regressions and validating that business logic works as expected are much-needed guarantees when building a software product. Customer feedback should never be the primary way we learn that something does not work; at least, that is what we keep telling ourselves. But too often, we lack sufficient testing infrastructure and, above all, actual tests to run. Why is that, and how can we solve it?
Neglecting to invest time and resources in building a testing pipeline leads to unexpected (and unnecessary) headaches later on. If your team's sprint was ever extended in scope due to regressions that could have been caught early, you'll know the feeling. Sometimes the product roadmap might seem more important than spending time on reaching reasonable test coverage. Or is it?
Over the past year, I worked on many services and infrastructure parts around our product at Hygraph, gaining first-hand experience of the problems most teams face with testing when moving fast on building a product. What I learned is that, above all else, the real challenge is to establish a testing culture.
While it might sound abstract at first, the idea of a testing culture is straightforward: Since we all agree on the necessity of having a test framework in place, every developer should ideally strive to write tests while working on a feature, no matter whether we're adding new business logic or fixing a bug. With this mindset, we get to the next point quickly: We often choose tools, libraries, and other utilities based on developer experience (DX). This means that using them should be a pleasant experience that makes you more productive. The same goes for testing: When time is critical, you need to be able to write and run tests quickly. So to conclude this part, we need to provide a delightful way to write and run tests with as little overhead and friction as possible; otherwise, they will be neglected. This means getting a lot of things right, but we're not quite done yet.
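To make that concrete, here is a minimal sketch of what a low-friction test can look like. I'm using Go and a hypothetical `Slugify` helper purely for illustration; the same idea applies to any stack. The point is that adding another scenario is a one-line change:

```go
package slug

import (
	"strings"
	"testing"
)

// Slugify is a hypothetical helper, included only so the sketch is
// self-contained; in a real codebase it would live in production code.
func Slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(s), " ", "-")
}

// A table-driven test keeps the cost of adding coverage low:
// each new scenario is a single line in the table.
func TestSlugify(t *testing.T) {
	cases := []struct {
		name, in, want string
	}{
		{"lowercases", "Hello", "hello"},
		{"replaces spaces", "Hello World", "hello-world"},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := Slugify(tc.in); got != tc.want {
				t.Errorf("Slugify(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```

When a test is this cheap to write and runs with a single `go test` invocation, it's much easier to make "write the test alongside the feature" the default rather than the exception.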
Now that we know that setting up a developer-friendly testing system is vital to building a culture where everyone loves to write tests, we need to schedule a set of actionable tasks to set it up. How do we want to write tests? Which testing framework would we like to use for end-to-end tests? When are tests run? Can we automate and/or schedule tests to run on every code change? All these (and many more) questions will arise when we take time to think about the perfect test flow. This is an individual effort for every team, as some decisions are highly opinionated. But the decision to schedule the necessary time is not.
If we want to achieve satisfactory coverage and avoid regressions and bugs surfacing when the schedule is already tight, we need to build a foundation for testing early on and give it the attention it deserves, treating it as equally important as product work. And while this might sound like wasting time that could be better spent developing features, it is a one-time effort that, done the right way, will prevent a lot of trouble and frustration later on.
One observation throughout my team's journey was that we'd often vocalize the intent to write more tests and spend time on improving the foundation of our testing architecture. In itself, this is a great sign, because it shows that the testing culture is already present and people want to write more tests. The next step, in this case, would have been to schedule actionable tickets to improve the parts we didn't like, and to shift towards including tests in our acceptance criteria (or definition of done).
Once we're happy with the tests we wrote, we'll quickly notice that some test suites take quite a while to run to completion. I've seen integration and end-to-end test suites that would run for hours. This does not have to be a problem, but it makes it harder to run all tests for every change, for example to validate that we didn't break another part while fixing a bug. Ideally, we should strive to find a balance between running the most important tests on every change and running the full suite regularly to make sure everything still works. And again, we're playing the game of scheduling tickets to work on automating test execution and reporting, as well as other CI flows.
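One lightweight way to strike that balance, sketched here for a Go codebase (the test names and the checkout flow are purely illustrative): mark slow integration or end-to-end tests so that `go test -short ./...` skips them on every change, while the complete suite still runs in a scheduled job.

```go
package checkout

import "testing"

// TestCheckoutEndToEnd stands in for a slow end-to-end test. It opts out of
// short runs, so `go test -short ./...` stays fast on every change while the
// full suite can still run on a schedule (e.g. nightly).
func TestCheckoutEndToEnd(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping slow end-to-end test in -short mode")
	}

	// ...spin up dependencies and walk through the whole flow here...
}

// TestTotal is a stand-in for a fast unit test that runs in every mode.
func TestTotal(t *testing.T) {
	if got := 19 + 23; got != 42 {
		t.Errorf("got %d, want 42", got)
	}
}
```

Running the short variant in the pre-merge pipeline and the plain `go test ./...` in a nightly job is one way to automate that split; the same idea translates to test tags or separate suites in other ecosystems.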
By the time you've reached this section, you will probably have noticed that most of the work of establishing a testing culture lies in convincing stakeholders that we need to prioritize actionable tasks to build the necessary infrastructure and onboard the team early on. To settle this once and for all: we're not hacking together a side project for fun here, we're building business-critical infrastructure. For a software company, (automated) quality assurance depends on a viable testing strategy. I would argue that nobody in charge wants to be slowed down by regressions surfacing repeatedly, or by customers churning because they're the ones finding the bugs.
As always, thank you for reading this post! If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on Twitter or by mail.