Common damage report
Many projects suffer from a lack of testing. A shortage of time, budget, or motivation to write tests is a common excuse for skipping them. The damage comes later, when the project is actually used for the first time and obscure errors and bugs pop up. At that stage, it can be very hard to trace the cause of a problem in the mass of untested code. A lot of time is wasted, all of which could have been avoided if the code had been tested.
In a worst-case scenario, there are no tests at all. Sometimes there are only unit tests. Unit testing is only the very beginning, yet it is often used as an excuse not to write any more tests. Unit tests are the no-brainer tests: this first interaction with your software flushes out some common errors and makes sure that the very basic behavior of a single unit holds. However, this is by no means a fully tested project, no matter how many of them you write.
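As a minimal sketch of such a "no-brainer" first test, here is a unit test written with Python's built-in unittest module; the `parse_port` function is a hypothetical unit under test, invented purely for illustration:

```python
import unittest

# Hypothetical unit under test: parsing a port number from configuration text.
def parse_port(value: str) -> int:
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTests(unittest.TestCase):
    """The no-brainer first tests: one happy path, one obvious error case."""

    def test_parses_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range_port(self):
        with self.assertRaises(ValueError):
            parse_port("70000")
```

Tests like these are quick wins: they catch typos and off-by-one mistakes early, but they only ever exercise one unit in isolation.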
Untested code does not work
One of the mantras I tell myself when working on a project is this: any code that is not exercised during testing does not work. You should rely fully on your tests for the correctness of your project; no manual intervention or manual inspection can compete with a test. This mantra brings out the paranoia in each of us: it assumes that none of your code works, and only by building up test coverage can we assume that parts of the code base do.
This is a test-infected mentality, and it grows once you start seeing your code base this way. 'Acceptance tests' make sure that the client's expectations are met; the same should hold for the rest of the code base. Any assumption or doubt should be addressed with a test.
No automation and you're doomed to deprecation
However, this is not enough. You could have a whole test suite covering the entire code base and still see errors and bugs in the end product. A test that is not automated becomes obsolete. Software changes rapidly, which means the tests have to grow with it. A good test suite is built to absorb changes in the production code, but once in a while the test suite needs a refactoring cycle of its own.
Automating every single one of your tests brings out the real added value of testing. Simple, fast tests belong in the CI pipeline; if the total run stays below 10 minutes, the integration tests can join them. Otherwise, a scheduled build for any long-running test is in order. The results of performance and security test suites should be handled the same way. A test failure should result in an automatic work task, an (email) notification, or some other way to alert the team. An automated test that is not managed correctly is as bad as no test at all.
Many flavors, many purposes
As mentioned, there are different types of tests, each with a different purpose. Because of this, writing only unit tests is not enough. It barely scratches the surface.
At Arcus, we have a mentality of writing unit and integration tests for each new or adapted feature. These tests are part of the CI build, which means that every change goes through the whole process when a new pull request (PR) comes in.
Just like code does not work if it is not tested, other qualities do not hold if they are not verified. Is the software fast enough? If you do not have performance tests, you won't know. Is the software safe enough? If you do not have security tests, you won't know. How does your software behave as sub-parts and as a whole? If you do not have integration tests, you won't know.
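A performance check can take the same shape as any other test. Here is a minimal sketch with Python's built-in unittest, where `handle_request` and the one-second budget are hypothetical stand-ins for a real code path and a real latency requirement:

```python
import time
import unittest

def handle_request() -> str:
    # Stand-in for the code path whose latency matters in your project.
    return "ok"

class PerformanceTests(unittest.TestCase):
    def test_requests_stay_under_budget(self):
        start = time.perf_counter()
        for _ in range(1_000):
            handle_request()
        elapsed = time.perf_counter() - start
        # Hypothetical budget: 1,000 calls should comfortably finish within a second.
        self.assertLess(elapsed, 1.0)
```

Once the budget is encoded as an assertion, a performance regression fails the build just like any functional bug would.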
Often, I feel like a broken record on this topic. So many practices and ways of working have been created to introduce tests into projects: test-first mindsets, test-infected thinking, failing-test approaches, and test-driven development. Yet mature testing habits are still missing from a lot of projects. I hope this post has convinced you why this shouldn't be the case!
Thanks for reading!