
Preparing Test Suites for Acceptance Testing

Testing is the (only?) way to be sure that software is doing what it is supposed to do. Acceptance testing is the way to verify that the software does what the user expects it to do. This post goes over how to prepare your test suite for acceptance testing.

The gap between business profiles and testing

Acceptance test cases are set up together with the client, because the client determines the requirements. Translating these requirements into simple (non-technical) words is the first step in determining the necessary test cases. The important part here is that the test cases are understandable for both business and technical profiles. A common domain language (the ubiquitous language from domain-driven design, DDD) is critical for this.

The next step is to translate these explanations into technical specifications and code.

👀 This post does not use SpecFlow as a ‘fix all’ solution, as writing any type of test should be language-independent.

Provide readable test names

The difference between technical, lower-level tests and acceptance tests lies in the words that describe them. Any type of test should have a readable and quickly identifiable name. Acceptance tests are different in that their descriptions cannot always be translated to code one-to-one, especially since most testing frameworks use the member names of the tests as the test names.

There are several options to close the gap between the agreed-upon test case sentences and technical implementations:

  • Expecto: Fully supports sentence structure as test names.
  • xUnit:
    • Use DisplayName on the tests to keep the test case name separate from its technical member name.
    • Use GivenThis_WhenThat_ShouldBe() test names and configure xunit.runner.json with methodDisplayOptions: replaceUnderscoreWithSpace (see the sketch after this list).
  • SpecFlow: Describe the test case in plain text; each step translates to mutable ‘hooks’ (step definitions) in the code.
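
The xUnit route could look like this minimal sketch (the scenario, member names, and runner configuration shown in the comments are assumptions for illustration):

```fsharp
// Assumed xunit.runner.json next to the test assembly:
//   { "methodDisplay": "method", "methodDisplayOptions": "replaceUnderscoreWithSpace" }
open Xunit

// Displayed via DisplayName, independent of the member name.
[<Fact(DisplayName = "Given a registered customer, when an order is placed, a confirmation is sent")>]
let ``order confirmation`` () =
    Assert.True(true) // placeholder assertion

// Displayed as "GivenARegisteredCustomer WhenAnOrderIsPlaced ShouldSendAConfirmation"
// once the runner replaces underscores with spaces.
[<Fact>]
let GivenARegisteredCustomer_WhenAnOrderIsPlaced_ShouldSendAConfirmation () =
    Assert.True(true) // placeholder assertion
```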

My personal choice is Expecto, as it already has the option to use full text as the name of the test case, without any extra setup. The other options require either extra configuration or extra test management/training besides the test framework itself.
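
As a minimal Expecto sketch (the scenario and assertion are made up), the full agreed-upon sentence is simply the test name, with no attributes or runner configuration involved:

```fsharp
open Expecto

// The agreed-upon sentence is the test name itself; no extra setup needed.
let acceptanceTests =
    testList "Order acceptance" [
        testCase "Given a registered customer, when an order is placed, a confirmation is sent" <| fun _ ->
            // Placeholder assertion: in a real suite this would drive the application.
            Expect.isTrue true "placing an order should result in a confirmation"
    ]
```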

Provide readable test input names

As with the test names, any test input should be quickly identifiable. This is especially the case for data-driven tests, where the input becomes part of the test name. Some testing frameworks fall back on the default value representation when converting the input to text. Be aware of this and, if necessary, provide a custom description of what the input represents within the acceptance test case.

  • Expecto: F# records are shown by their values by default. Discriminated unions (DUs) can provide both a meaningful description and the functionality needed as test input (see the sketch after this list).
  • xUnit: It is best to wrap values from [MemberData(...)] and other attributes in an extra layer of indirection, overriding the ToString() method to provide a meaningful description of the input.
  • SpecFlow: Uses regular expressions to pass inputs to the mutable ‘hooks’, but is limited to primitive types: more complex types require extra work, such as mapping an enumeration to the actual functionality, to provide meaningful descriptions.
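
For example, here is a small Expecto sketch (the domain names and system under test are hypothetical) where a DU serves both as readable test input and as the data the test needs:

```fsharp
open Expecto

// Hypothetical domain input: each DU case reads well inside the test name
// and still carries the data the test needs.
type BasketScenario =
    | EmptyBasket
    | BasketWithItems of itemCount: int

// Hypothetical system under test.
let itemsInBasket scenario =
    match scenario with
    | EmptyBasket -> 0
    | BasketWithItems count -> count

let basketTests =
    testList "Basket acceptance" [
        for scenario, expected in [ EmptyBasket, 0; BasketWithItems 3, 3 ] ->
            testCase (sprintf "The basket '%A' contains %i item(s)" scenario expected) <| fun _ ->
                Expect.equal (itemsInBasket scenario) expected "unexpected number of items in the basket"
    ]
```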

Provide readable test assertions

The context given to a failed acceptance test case is very important. The test failure may include the full detailed version of the test case description, explaining what is expected.

The actual failure should, as with any other type of test, be understandable enough that a good guess can be made about where the problem resides. Since these tests exercise entire applications, logging and user-written test messages are extremely important, as they are the first place to look for the problem and for defect localization.
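
As a small illustration (the rule and message are made up), a descriptive assertion message can name the business rule that was violated instead of only showing the raw values:

```fsharp
open Expecto

let assertionExample =
    testCase "A confirmed order can no longer be edited" <| fun _ ->
        // Would normally come from driving the real application.
        let orderIsStillEditable = false
        Expect.isFalse orderIsStillEditable
            "a confirmed order should be locked for editing, but the application still allowed changes"
```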

Provide readable summary test reports

A report for acceptance testing should not include technical details, such as which test framework was used or how long the tests took to run (except, of course, if that is one of the requirements of the acceptance test case).

If the tests have good test names, generating a summary report becomes easy. A simple list of successful test cases could be enough. If a new requirement and its acceptance test case are added, they can be marked as ‘skipped’ in the report, clearly showing which scenarios are already supported by the system.

💡 The full detailed description or a link to that description could be useful in this report as well.
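
A minimal Expecto sketch of that idea (the scenarios are made up): a pending test (ptestCase) shows up as skipped, so a newly agreed requirement is visible in the report before it is implemented.

```fsharp
open Expecto

let reportTests =
    testList "Order acceptance" [
        // Implemented scenario: shows up as passed in the summary.
        testCase "A customer can place an order for items that are in stock" <| fun _ ->
            Expect.isTrue true "ordering available items should succeed"

        // Newly agreed requirement, not implemented yet: reported as skipped.
        ptestCase "A customer is warned when an item goes out of stock during checkout" <| fun _ ->
            ()
    ]
```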

Provide readable categories

Providing metadata such as categories or tags/badges can be a good idea for grouping tests, or for quickly identifying a certain type of test. A tag/badge added to all critical tests could, for example, show whether a failing test should be given priority or not. This is especially useful when dealing with many tests of many different types: tagging them makes it clear, even without reading the test name, which response action should be taken.

  • Expecto: Has the combinable testLabel function, which can easily add multiple tags to any test or group of tests (see the sketch after this list).
  • xUnit: Has [Trait(...)] attributes to decorate tests, but these are not always shown in the test report.
  • SpecFlow: Has @... tags to decorate certain test scenarios.
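
For example, a minimal Expecto sketch (the test names are illustrative) where a whole group of tests is tagged as critical via testLabel:

```fsharp
open Expecto

// 'testLabel' prefixes the tests with a label, so a failing test immediately
// shows whether it belongs to the critical group, and the suite can be
// filtered on that label when running.
let paymentTests =
    testList "Payments" [
        testCase "A paid order is marked as completed" <| fun _ ->
            Expect.isTrue true "a successful payment should complete the order"
    ]
    |> testLabel "critical"
```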

Conclusion

There is a gap between what the test case describes and how the tests are implemented. Different testing frameworks use different approaches to close this gap, ranging from giving good names to tests and their inputs, to providing readable reports for those tests. The important thing to keep in mind is that the tests should be technical enough for technical people to understand and adapt, but should generate a test report that is business-like enough for non-technical people to understand.

Thanks for reading!
Stijn
