NBomber test scenario vs Expecto tests
NBomber and Expecto are both testing frameworks implemented in F#. Expecto is the fundamental, flexible framework for describing tests, while NBomber is the tool for running complex test scenarios. NBomber examples usually show how it can be used in a console app, not in an existing testing framework. In a previous post, I described how to integrate NBomber in xUnit, and how we almost immediately ran up against the limitations of xUnit.
Expecto is far more flexible in how and when tests are described. Unlike xUnit, it can describe test scenarios as regular tests. To me, this is real added value compared to the previous setup: by pairing a test scenario with a test outcome, you can immediately see in the built-in test tools which test scenario(s) failed. Yes, the load tests will probably be inspected in detail anyway, but that does not mean the regular test outcome should not be reported correctly.
Dedicated test scenario model
As a (load) test could contain an assertion phase, we need an interaction model between what NBomber exposes and what Expecto asserts upon. This assertion part could also contain additional checks, to make sure that the report NBomber generates does not contain false positives.
The model is a simple combination of the test scenario and the assertion function:
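A minimal sketch of what such a model could look like, assuming NBomber's `ScenarioProps` and `ScenarioStats` contract types (the record and field names here are my own, not necessarily the ones from the original code):

```fsharp
open NBomber.Contracts
open NBomber.Contracts.Stats

/// Pairs an NBomber scenario with the Expecto assertion that should
/// run against that scenario's statistics after the load run.
type TestScenario =
    { Scenario : ScenarioProps
      Assertion : ScenarioStats -> unit }
```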
We can create a simple ‘translation’ function to add an assertion to a test scenario:
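Such a translation function could be sketched like this (the name `withAssertion` is my own; it simply wraps a plain NBomber scenario together with its assertion):

```fsharp
module Scenario =
    /// Attaches an assertion to an NBomber scenario, producing
    /// the test scenario model used by the Expecto translation layer.
    let withAssertion (assertion : ScenarioStats -> unit) (scenario : ScenarioProps) : TestScenario =
        { Scenario = scenario; Assertion = assertion }
```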
This will make sure that we secretly convert to a test scenario model while hiding the temporary model itself. There are numerous ways to make this translation, and I even experimented with a composition builder for a while. However, we should keep in mind that the focus should be on the NBomber scenario model and the syntax it uses. Any deviation adds complexity.
Introducing test load translation layer
Now comes the most complex part of the exercise. The goal is to map scenario outcomes to test outcomes; therefore, each NBomber scenario should become a test. As NBomber runs the scenarios rather than Expecto, the way these two frameworks interact differs a little from regular test customization.
This is what I came up with:
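A sketch of such a translation layer, assuming the NBomber F# runner API (`NBomberRunner.registerScenarios`, `NBomberRunner.run`); the function name and the assumption that stats come back in registration order are mine:

```fsharp
open Expecto
open NBomber.FSharp

module TestScenario =
    /// Projects each test scenario to an Expecto testCase that asserts
    /// on the stats NBomber gathered for that scenario.
    let ofScenarios label (scenarios : TestScenario list) : Test =
        // Run NBomber once, lazily, the first time any generated test executes.
        let nodeStats =
            lazy
                (scenarios
                 |> List.map (fun ts -> ts.Scenario)
                 |> NBomberRunner.registerScenarios
                 |> NBomberRunner.run
                 |> function
                    | Ok stats -> stats
                    | Error err -> failwithf "NBomber run failed: %s" err)
        // "Zip" each scenario with its stats by index (assumes NBomber
        // reports stats in registration order) and map the pair to a testCase.
        scenarios
        |> List.mapi (fun i ts ->
            testCase ts.Scenario.ScenarioName (fun () ->
                let stats = nodeStats.Value.ScenarioStats.[i]
                ts.Assertion stats))
        |> testList label
```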
As shown in the sequence zipper and mapper, we project each scenario to a testCase function, running the custom-provided assertion function against the related scenario stats.
❌ The downside of this approach is that the test duration will not correspond to the time the scenario took to run. That is because NBomber has its own runner and does not support being embedded in tests like these out of the box (which is, of course, the whole point of this exercise).
✅ The plus side is that we now have a clear test outcome based on successful scenarios, something that was really lacking previously. Each scenario is a test.
💡 What increases the usefulness of this translation function is that we have full control over the number of tests we generate. As mentioned in the previous post, it is a good idea to include smoke tests for your load test scenarios, so that you always know what caused a test to fail (broken interaction with the system = smoke test fails; unable to handle load = load test fails).
This way, you can filter out the smoke tests for special runs during defect localization too.
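For example, since the generated tests end up in ordinary Expecto test lists, the standard Expecto CLI filters can single them out; the list names below are hypothetical:

```fsharp
// Group the generated smoke and load tests under separate, filterable lists.
let allTests =
    testList "pump messages" [
        testList "smoke" smokeTests
        testList "load" loadTests ]
```

Running the test project with Expecto's `--filter-test-list smoke` argument then executes only the smoke tests during defect localization.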
Input-based test scenarios
Load testing usually comes with several test fixture setups/teardowns, depending on the part of the system the scenario is testing. A complex input will probably impact the system more than a simple one, for example. Sending a single simple or complex message may not make much of a difference, but 1000+ messages might. So there needs to be a setup/teardown that prepares the system for each of these ‘plans’.
My previous post talked about the limitations of xUnit and how we needed to pass along all these plans in a single go. It was rather unstructured and did not really scale beyond roughly five plans. Since the xUnit attributes only work with constant data, you could create separate data members or even data classes, but that only increases complexity and moves us even further from home.
Luckily, Expecto allows us to be more flexible. We can just loop through our test plans and create a test scenario for each.
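A sketch of that loop, where the plans and the sendMessages helper are hypothetical stand-ins for the actual fixture logic:

```fsharp
// Hypothetical plans: message complexity paired with the amount to send.
let plans =
    [ "simple", 1
      "simple", 1000
      "complex", 1
      "complex", 1000 ]

let scenarios =
    [ for (kind, amount) in plans ->
        Scenario.create ($"pump {kind} messages x{amount}", fun ctx -> task {
            do! sendMessages kind amount   // hypothetical interaction with the system
            return Response.ok () }) ]
```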
The cool thing about this is that we can now use the input to set up any specific test fixture and tear it down again afterwards. What bothers me here, however, is that I do not want to manually change each test scenario’s name to reflect its input. That is why I created an additional Scenario.createMany function that automatically adds the type of the input to the test scenario’s name.
👀 Notice that I used an Aether lens TestScenario.name_ to map the test scenario’s name.
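Put together, Scenario.createMany could be sketched like this, assuming a TestScenario record that wraps the NBomber scenario, and assuming the scenario’s name is reachable via a record update; the exact signatures are guesses:

```fsharp
open Aether

module TestScenario =
    /// Aether lens into the wrapped scenario's name.
    let name_ : Lens<TestScenario, string> =
        (fun ts -> ts.Scenario.ScenarioName),
        (fun name ts -> { ts with Scenario = { ts.Scenario with ScenarioName = name } })

module Scenario =
    /// Creates one test scenario per input, suffixing each scenario's
    /// name with the input so failures are traceable to their plan.
    let createMany (inputs : 'a list) (create : 'a -> TestScenario) : TestScenario list =
        inputs
        |> List.map (fun input ->
            create input
            |> Optic.map TestScenario.name_ (fun name -> $"{name} ({input})"))
```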
This concludes the Expecto – NBomber experiment.
The test body of an NBomber load test is the creation of a test scenario model. Everything after that is about registering all the scenarios and running them. What this experiment did was take the idea and provide the necessary plumbing functions to make this happen in an Expecto environment. The test bodies in the latest example only contain the scenario and assertion. There is no direct relationship with any running functionality: we only describe test scenarios. It should be clear from these scenarios what kind of load the system is tested against and how that corresponds with any additional assertion. Plus, we have automatically generated smoke tests on the test scenario’s execution, all hidden from the main testing workspace. That is true test ignorance.
Thanks for reading!