Test contexts are a way to simplify certain interactions with a system so that the test is unburdened with technicalities. The results are simple, clear, easy-to-read tests that fully describe the test scenario. These test contexts are often the result of one or more refactoring cycles that remove duplication. In a way, they are a form of both ‘test ignorance’ and ‘test fixture’, as they remove details from the test body that are often related to interacting with a test fixture; therefore, they become a test fixture themselves.
The following code smells are logical missteps made while developing test contexts. The trick is to spot the pattern.
Tests that are too dependent
Tests that are too dependent on a test context usually don’t have much in their test body. They fully rely on the test context to provide and interact with the system. The following ‘potato example’ shows this:
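A minimal Python sketch of what such a too-dependent test could look like (the context class, its method names, and the growing rules are all hypothetical, chosen only to illustrate the smell):

```python
class PotatoTestContext:
    """Hypothetical test context that decides every detail of the scenario."""

    def plant_potato_in_full_sunlight(self):
        # The context picks the plant, the sunlight type, and the frequency.
        # The test itself contributes nothing.
        self._plant = {"kind": "potato", "sunlight": "full", "hours_per_day": 8}

    def potato_has_grown(self) -> bool:
        plant = self._plant
        return plant["sunlight"] == "full" and plant["hours_per_day"] >= 6


def test_potato_grows_in_full_sunlight():
    context = PotatoTestContext()

    # Note: the context method is named almost identically to the test.
    context.plant_potato_in_full_sunlight()

    assert context.potato_has_grown()
```

Nothing in the test body tells the reader which inputs actually matter; every meaningful choice is buried inside the context.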
👀 Notice that there is not a single input that the test has to give to the test context. Everything is done for it. The problem this causes is that the test context becomes ‘too specific’. It will mix lots of functionality for both handling the interaction with the system and providing meaningful functionality to the test. Maintaining such a test context will be hard, and changing the tests’ contents will be even harder.
You can spot this code smell when there are no inputs given to the test context. Often, you will see a provided method whose name is almost the same as the test name. In those circumstances, you probably have a test that is too dependent.
Compare this to the following example, where the type of sunlight is extracted from the test context:
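A hedged sketch of that refactoring, keeping the same hypothetical names: the context still knows *how* to plant a potato, but the type of sunlight now comes from the test body:

```python
class PotatoTestContext:
    """Hypothetical context: it knows how to plant, not which scenario to run."""

    def plant_potato(self, sunlight: str, hours_per_day: int = 8):
        # The default hours_per_day acts like the 'extra overload' for
        # sunlight frequency: tests that care about it can pass it explicitly.
        self._plant = {"kind": "potato", "sunlight": sunlight,
                       "hours_per_day": hours_per_day}

    def potato_has_grown(self) -> bool:
        plant = self._plant
        return plant["sunlight"] == "full" and plant["hours_per_day"] >= 6


def test_potato_grows_in_full_sunlight():
    context = PotatoTestContext()

    # The input that determines the outcome is now visible in the test body.
    context.plant_potato(sunlight="full")

    assert context.potato_has_grown()
```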
💡 It all depends on which types of tests you have elsewhere. An extra overload where you can specify the frequency of sunlight could be present too. Just make sure that all of the things the test needs to know about are visible in the test body and not hidden in the test context.
Control freak tests
Tests that are too controlling usually have a lot in their test body. The setup or assertion can be very large, which makes the required steps unclear.
The following ‘potato example’ shows this:
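A hypothetical Python sketch of such a controlling test: every technical detail lives in the test body, and the obscure test name gives the reader no help (all names and rules here are invented for illustration):

```python
def test_plant():  # obscure name: which scenario is this?
    # All of the technical wiring sits inside the test body.
    soil = {"type": "sandy", "ph": 6.0, "nutrients": ["nitrogen", "potassium"]}
    plant = {"kind": "potato", "soil": soil, "watered_days": []}

    # Simulate watering every other day for two weeks.
    for day in range(0, 14, 2):
        plant["watered_days"].append(day)
    water_frequency = len(plant["watered_days"]) / 14

    # The actual check is buried under the setup noise.
    can_grow = plant["soil"]["type"] == "sandy" and water_frequency >= 0.25
    assert can_grow
```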
Integration tests (and other types of tests, too) are often written like this in a first phase. The problem is that people tend to leave them like this. Although it might work, it does not clearly show what the test is trying to do and is hard to maintain. Often, those tests have just as obscure a test name, which does not provide the test reader with any additional information.
When you cannot determine at first glance what a test is doing, or when you are quickly distracted by its many dependencies, you may have a test that is too controlling.
👀 Look for related functionality, and look for certain inputs that the test requires to verify its result. Often your system will have several technical names to interact with certain functions or objects, but that does not mean that those should (always) be transferred to the test.
In this case, there are two inputs: the soil and the water frequency. Those two determine if the plant can grow:
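Sketched in Python under the same hypothetical names, the refactored test passes only those two determining inputs, while the context hides the technical wiring:

```python
class PlantTestContext:
    """Hypothetical context that hides the wiring behind two inputs."""

    def plant_potato(self, soil: str, water_frequency_per_week: int):
        self._soil = soil
        self._water_frequency = water_frequency_per_week

    def plant_can_grow(self) -> bool:
        # Invented rule for illustration: sandy soil plus enough water.
        return self._soil == "sandy" and self._water_frequency >= 2


def test_potato_grows_in_sandy_soil_with_enough_water():
    context = PlantTestContext()

    # Only the two inputs that determine the outcome appear in the test body.
    context.plant_potato(soil="sandy", water_frequency_per_week=3)

    assert context.plant_can_grow()
```

Because the scenario is now explicit, variations (clay soil, too little water) become one-line tests against the same context.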
💡 Refactoring towards this kind of setup will unburden your test body a great deal and often reveal possible reusability for other tests. What I found while doing this kind of refactoring is that, because the scenario is now clear, it inspires ideas about what is missing from the test suite. Once you clearly specify the test scenario, you will see what the scenario does not cover. And because the test setup is now simplified, adding those tests will take little effort.
The two code smells for test contexts are opposites of each other and bring back balance in your test context and test body. Finding this balance is key. It could be that this balance shifts during test development or because another set of tests is added with similar requirements. Either way, it is always a good thing to keep your test scenario in mind and see if your test context helps – but not too much – in accomplishing this.
Thanks for reading,