
Codit Blog

Posted on Tuesday, August 8, 2017 12:35 PM

by Toon Vanhoutte

Should we create a record or update it? What constraints define whether a record already exists? These are typical questions we need to ask ourselves when analyzing a new interface. This blog post focuses on how we can deal with such update/insert (upsert) decisions in Logic Apps. Three popular Logic Apps connectors are investigated: the Common Data Service, File System and SQL Server connectors.

Common Data Service Connector

This blog post by Tim Dutcher, Solutions Architect, was the trigger for writing about this subject. It describes a way to determine whether a record already exists in CDS, by using the "Get Record" action and deciding based on the returned HTTP code. I like the approach, but it has the downside that it's not 100% bulletproof: an HTTP code different from 200 doesn't always mean you received a 404 Not Found.

My suggested approach is to use the "Get List of Records" action, while defining an OData filter query (e.g. FullName eq '@{triggerBody()['AccountName']}'). In the condition, check whether the result array of the query is empty or not: @equals(empty(body('Get_list_of_records')['value']), false). Based on the outcome of the condition, update or create a record.
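
In code view, the pattern could look roughly like this (a sketch only; the action names, connector path and entity are illustrative, and the bodies of both branches are trimmed):

    "Get_list_of_records": {
        "type": "ApiConnection",
        "inputs": {
            "method": "get",
            "path": "/datasets/default/tables/accounts/items",
            "queries": {
                "$filter": "FullName eq '@{triggerBody()['AccountName']}'"
            }
        }
    },
    "Condition": {
        "type": "If",
        "expression": "@equals(empty(body('Get_list_of_records')['value']), false)",
        "actions": { "Update_a_record": { } },
        "else": { "actions": { "Create_a_new_record": { } } }
    }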

File System Connector

The "Create file" action has no option to overwrite a file if it exists already. In such a scenario, the exception "A file with name 'Test.txt' already exists in the path 'Out' your file system, use the update operation if you want to replace it" is thrown.

To overcome this, we can use a similar approach as described above. Because the "List files in folder" action does not offer a filter option, we need to do the filtering with an explicit Filter array action. Afterwards, we can again check whether the resulting array is empty or not: @equals(empty(body('Filter_array')), false). Based on the outcome of the condition, update or create the file.
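
A sketch of such a Filter array action in code view (assuming the file metadata returned by "List files in folder" exposes a Name property; names are illustrative):

    "Filter_array": {
        "type": "Query",
        "inputs": {
            "from": "@body('List_files_in_folder')",
            "where": "@equals(item()?['Name'], 'Test.txt')"
        }
    }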

You can also achieve this in a quick and dirty way. It's not bulletproof and not clean, but perfect in case you want to create fast demos or test cases. The idea is to try the "Create file" action first and configure the next "Update file" action to run only if the previous action failed. Use it at your own risk :-)

SQL Server Connector

A similar approach with the "Get rows" action could also do the job here. However, if you manage the SQL database yourself, I suggest creating a stored procedure. This stored procedure can take care of the IF-ELSE decision server side, which makes the operation idempotent.
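
A minimal sketch of such a stored procedure (table and column names are illustrative):

    -- Server-side upsert: the IF-ELSE decision lives in the database,
    -- so calling it twice with the same data has the same effect as calling it once.
    CREATE PROCEDURE dbo.UpsertCustomer
        @CustomerId INT,
        @FullName NVARCHAR(100)
    AS
    BEGIN
        SET NOCOUNT ON;

        IF EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerId = @CustomerId)
            UPDATE dbo.Customer SET FullName = @FullName WHERE CustomerId = @CustomerId;
        ELSE
            INSERT INTO dbo.Customer (CustomerId, FullName) VALUES (@CustomerId, @FullName);
    END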

This results in an easier, cheaper and less chatty solution.

Conclusion

Create/update decisions are closely related to idempotent receivers. Truly idempotent endpoints deal with this logic server side. Unfortunately, there are not many of those endpoints out there. If you manage the endpoints yourself, it's up to you to make them idempotent!

In case the Logic App needs to make the IF-ELSE decision, you get chattier integrations. To avoid reconfiguring such decisions over and over again, it's advisable to make a generic Logic App that does it for you and to consume it as a nested workflow. I would love to see this out-of-the-box in many connectors.

Thanks for reading!
Toon

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Friday, July 28, 2017 4:59 PM

by Toon Vanhoutte

Recently I was asked for an intervention on an IoT project. Sensor data was sent via Azure IoT Hub towards Azure Stream Analytics. Events detected in Azure Stream Analytics resulted in a Service Bus message that had to be handled by a Logic App. However, the Logic App failed to parse the JSON message it took from the Service Bus queue. Let's have a more detailed look at the issue!

Explanation

The scenario looks like this: sensor data flows via Azure IoT Hub into Azure Stream Analytics, which drops the detected events as messages on a Service Bus queue, from which a Logic App picks them up.

We encountered an issue in Logic Apps the moment we tried to parse the message into a JSON object. After some investigation, we realized that the Service Bus message wasn't actually pure JSON: it was preceded by string serialization "overhead".

Thanks to our favourite search engine, we came across this blog that nicely explains the cause of the issue. The problem is situated at the sender of the message: the BrokeredMessage is created from a string object:
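
A sketch of such sender-side code (using the Microsoft.ServiceBus.Messaging namespace; variable names are illustrative):

    // Constructing a BrokeredMessage from a string runs the payload through
    // a DataContractSerializer, which prepends serialization overhead to the JSON.
    string json = "{ \"SensorId\": \"A4\", \"Temperature\": 21.5 }";
    var message = new BrokeredMessage(json);   // string body: overhead gets added
    queueClient.Send(message);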

If you control the sender side code, you can resolve the problem by passing a stream object instead.
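
A sketch of the stream-based alternative (same assumptions as above; System.IO and System.Text are also needed):

    // Passing a stream writes the payload as-is, without serialization overhead.
    var body = new MemoryStream(Encoding.UTF8.GetBytes(json));
    var message = new BrokeredMessage(body, ownsStream: true);
    queueClient.Send(message);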

Solution

Unfortunately we cannot change the way Azure Stream Analytics behaves, so we need to deal with it at the receiver side. I've found several blogs and forum answers suggesting to clean up the "serialization garbage" with an Azure Function. Although this is a valuable solution, I always tend to avoid additional components if they're not really needed. Introducing Azure Functions comes with additional cost, storage, deployment complexity, maintenance, etc…

As this is actually pure string manipulation, I had a look at the available string functions in the Logic Apps Workflow Definition Language. The following expression removes the unwanted "serialization overhead":
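
The original expression was shown as an image, so this is a sketch along those lines (assuming the Service Bus trigger delivers the body base64-encoded in ContentData, and that the actual JSON is everything between the first '{' and the last '}'):

    @json(substring(base64ToString(triggerBody()?['ContentData']), indexOf(base64ToString(triggerBody()?['ContentData']), '{'), add(sub(lastIndexOf(base64ToString(triggerBody()?['ContentData']), '}'), indexOf(base64ToString(triggerBody()?['ContentData']), '{')), 1)))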

If you use this in combination with the Parse JSON action, you have a user-friendly way to extract data from the JSON in the next steps of the Logic App. In the sample below, I just used the Terminate action for testing purposes. You can now easily use SensorId, Temperature, etc…

Conclusion

It's a pity that Azure Stream Analytics doesn't behave as expected when sending messages to Azure Service Bus. Luckily, we were able to fix it easily in the Logic App. Before reaching out to Azure Functions as an extensibility option, it's advisable to have an in-depth look at the available Logic Apps Workflow Definition Language functions.

Hope this was a timesaver!
Cheers,
Toon

Categories: Azure
written by: Toon Vanhoutte

Posted on Monday, July 24, 2017 8:24 AM

by Stijn Moreels

Introduction

In this part of the Test Infected series, I will talk about the ignorance of tests and how we can achieve more ignorance.

The short answer: if it doesn’t contribute to the test, hide/remove it!

The reason I wrote this post is because I see many tests with an overload of information embedded, and with some more background information people may increase the ignorance of their tests.

Ignorance

Test-Driven Discovery

Sometimes people write tests just because they are obliged to do so. Only when someone is looking over their shoulder do they write tests; in any other circumstances they don't. It's also crazy to see people abandon their practices (TDD, Merciless Refactoring, …) the moment there is a crisis, a need for a quick change, or something else that causes stress.

The next time you’re in such a situation, look at yourself and evaluate how you react. If you don’t stick with your practices in those situations, then do you really trust your practices at all?If you don’t use your practices in your stress situations, abandon them, because they wouldn’t work for you (yet).
This could be a learning moment for the next time.

Now, that was a long intro to come to this point: what happens after we have written our tests (whether Test-First or Test-Last)?

Test Maintenance

In my opinion, the one thing Test Ignorance is really about is Test Maintenance. When there are changes to the SUT (System Under Test), how much of the production code do you have to change, and how much of the test code?

When you (over)use Mock Objects (and Test Doubles in general), you can end up in a situation that Gerard Meszaros calls Overspecified Software. The tight coupling between the tests and the production code causes this Smell.

But that’s not actually the topic I want to talk about (at least not directly). What I do want to talk about are all those tests with so much information in them that every method/class/… is Obscured.

People read books about patterns, principles, practices… and try to apply them to their Production Code, but forget their Test Code.

Test Code should be as clear as the Production Code.

If a method in production has 20 lines of code and people keep losing time reading and rereading it (how many times do you reread a method before you refactor?), you refactor it into smaller parts to improve usability, readability, intent…

You do this practice in your production code; so, why wouldn’t you do this in your test code?

I believe one of the reasons people sometimes abandon their tests is that they think they get paid for production code (and not because of a lack of discipline). It's as simple as that. But remember that you get paid for stable, maintainable, high-quality software, and you simply can't deliver that without tests that are easily maintainable.

"Ignorance is Bliss" Patterns

“I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss.”

- Cypher (The Matrix)

Now that you understand that your test code is just as important as your production code, we can start by defining our Ignorance in our tests.

There are several Test Patterns in literature that support this Test Ignorance, so I’ll give you just the concepts and some quick examples.

This section is about readability and how we can improve this.

Unnecessary Fixture

The one place where you could start, and where a Test Smell is most obvious, is the Fixture Setup. Not only can this section be enormous (I've seen gigantic fixtures) and hard to grasp, it is also hard to change and therefore to maintain.

Consider an example where we need to set up a "valid" customer before we can insert it into the Repository. In such a test, do I really need to know all the different items required to make the customer invalid? Do we need all of them? Maybe it's just the id that's missing, but that could be autogenerated; or maybe the address doesn't exist, …

Only show what I need to make the test pass.

We can rework the example with a Parameterized Creation Method, an example of the One Bad Attribute Test Pattern (see the sketch below). In the future, we could also parameterize the other properties if we want to test functionality that depends on that information. If that isn't the case, we leave those initializations inside the Creation Method for the customer instead of polluting the test with unnecessary information.
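
A minimal sketch of the idea (the Customer type, repository and exception are hypothetical; only the attribute under test is visible in the test itself):

    [Fact]
    public void InsertingCustomerWithoutIdFails()
    {
        Customer customer = CreateCustomerWith(id: null);

        Assert.Throws<InvalidCustomerException>(() => _repository.Insert(customer));
    }

    private static Customer CreateCustomerWith(string id)
    {
        // Everything except the id is a valid default the test doesn't care about.
        return new Customer
        {
            Id = id,
            FullName = "Jane Doe",
            Address = "1 Main Street"
        };
    }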

Now, if we want to act as fully Test Infected people, we can also Test-Drive these Creation Methods. The next time you analyze the code coverage of your code, include the test projects and write tests for these methods too! This increases your Defect Localization: if there's a problem with the fixture rather than with a test that uses it, the failing tests will tell you the problem lies in the Fixture and not in the test itself.

Also note that this newly created method is only accessible within this test class. If we want to write tests with the same Fixture elsewhere, we can extract this logic into its own class.

Either way, we have made our intentions clear to the test reader. I always try to ask the test the following question: "Do you really care if you know this?".

Again, the test can be made clearer if we pass in the argument that makes the customer invalid, so we know the cause of the customer not being inserted. If we move the "id" somewhere else, we won't know what causes it, and we would make the test more Obscure.

I see some reasons why a Test Fixture can be big:

  • The Fixture has a lot of “setup-code” in place because the SUT is doing too much. Because the SUT is doing all these steps, we must build our Fixture with a lot of info and behavior. Otherwise, the SUT will see the Fixture as invalid.
  • The Fixture is the smallest possible for exercising the SUT and the SUT is a Complete Abstraction, but it nonetheless needs a Fixture that takes some lines to set up before it is valid.
  • The Fixture contains some unnecessary information that doesn't contribute to the result of the test but is embedded in the test anyway.

So, there are a lot of different possibilities why a Fixture can be big, and the solution is the same for all these situations: make the Fixture as small as possible, and only add information to the test that is relevant to its result. Contribute or get out.

Now, if you move ALL the Fixture code somewhere else (extracting too much), you also have a problem. Test readers will then see Magic Fixtures in place that act as Mystery Guests, which can result in Fragile Tests.

Obscured by Eagerness

Sometimes I encounter tests that are "Obscured by Eagerness". A Test can be "obscure" for different reasons: one is that we want to assert too much in a single test, another that we want to "set up" too much in a single run, and yet another that we combine tests into a single test run by exercising multiple actions on the SUT.

To summarize:

  • Eager Assertion: assert on too much state and/or behavior in a single run.
  • Eager Fixture: set up unnecessary fixture (see previous section).
  • Eager Exercises: exercise multiple actions on the SUT to combine tests.

I’ve seen people defend tests with more than 20 assert statements because they still tested a “single unit” outcome. Sometimes you have functionality that looks like you have to write 20 assert statements or more, but instead of writing those statements you should ask yourself: What are you trying to test?

By explicitly asking yourself this question, you often come up with surprising results.

Because the assert-phase of the test (in a Four Phase Test) is what verifies the outcome of the test (failed or succeeded), I always try to write this phase first. It forces you to think about what you're trying to test and not about what you need to set up as Fixture. By writing this phase first, you're writing your test from bottom to top and only define what you really need. This way (just like writing tests for your production code), you only write what you need.

Such a test is a perfect example of how we can abuse the Assert-Phase. By placing so many asserts in a single spot, we obscure what we're really trying to test. We need to test whether the message is serialized correctly; so instead of manually asserting each element, why not assert on the whole XML?

We create an expected XML string and verify that it is the same as the actual serialized XML string.
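
A sketch of that single assert (the Message type and serializer are hypothetical):

    [Fact]
    public void SerializesMessageToExpectedXml()
    {
        var message = new Message(id: "1", sensorId: "A4", temperature: 21.5);

        string actualXml = XmlMessageSerializer.Serialize(message);

        // One assert on the whole document replaces twenty element-by-element asserts.
        const string expectedXml =
            "<message id=\"1\"><sensorId>A4</sensorId><temperature>21.5</temperature></message>";
        Assert.Equal(expectedXml, actualXml);
    }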

Conclusion

Writing tests should be taken as seriously as writing production code; only then can we have maintainable software solutions where developers are eager to run tests instead of ignoring them.

The next time you write a test, think carefully about it. What should I know? What do I find important to exercise the SUT? What do I expect? … This way you can determine which items are important and which aren't.

I sometimes "pretend" to be the test case:

“Do I care how this Fixture is set up?”
“Must I know exactly how to assert all these items?”
“Have I any interest of how a ‘valid’ object looks like?”
“What do I really want to test and what information only pollutes this?”
“Do I care that these actions must be executed before the exercise of the test?”

Tests only need to know what they need to exercise the SUT, nothing more, but equally important: nothing less!

Categories: Technology
Tags: Code Quality
written by: Stijn Moreels

Posted on Thursday, July 20, 2017 6:26 PM

by Toon Vanhoutte

Jon Fancey announced the out-of-the-box batching feature in Logic Apps at Integrate! Early this morning, I noticed by accident that this feature has already been released in West Europe. This blog contains a short preview of the batching functionality. There will definitely be a follow-up with more details and scenarios!

For batching you need two processes:

  • Batch ingestion: responsible for queueing messages into a specific batch
  • Batch release: responsible for dequeuing the messages from a specific batch when certain criteria are met (time, number of messages, external trigger…)

Batch Release

In Logic Apps, you must start with the batch release Logic App, as you will need to reference it from the batch ingestion workflow. This avoids sending messages into a batch that does not exist! The batch release trigger looks as follows (a code-view sketch is shown after the list below):

You need to provide:

  • Batch Name: the name of your batch
  • Message Count: specify the number of messages required in the batch to release it
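
A rough code-view sketch of such a batch release trigger (the batch name is illustrative, and the exact schema may differ from what the designer generates):

    "triggers": {
        "Batch_messages": {
            "type": "Batch",
            "inputs": {
                "mode": "Inline",
                "configurations": {
                    "InvoiceBatch": {
                        "releaseCriteria": {
                            "messageCount": 5
                        }
                    }
                }
            }
        }
    }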

More release criteria will definitely be supported in the future.

Batch Ingestion

Now you can inject messages into the batch. To demonstrate this, I created a simple request/response Logic App that contains the Send messages to batch action. First you need to specify the previously created Logic App that is responsible for the batch release.

Once you've done this, you can specify all required info.

You need to provide:

  • Batch Name: the name of the batch. This will be validated at runtime!
  • Message Content: the content of the message to be batched.
  • Partition Name: specify a "sub-batch" within the batch. In my scenario, all invoices for one particular customer will be batched together. If empty, the partition will be DEFAULT.
  • MessageId: a message identifier. If empty, a GUID will be generated.

The result

I've triggered the batch-ingest Logic App many times. This queues messages within the batch.

Each time 5 messages belonging to the same partition are available in the batch, the batch release Logic App fires.

The output looks like this:

Conclusion

Very happy to see this added to the product, as batching is still required nowadays. I thought it would be part of the Integration Account; cool to see there is no dependency on that. The batch release process does not use a polling trigger, so it also saves you some additional costs.

I'll get in touch with the product group for some feedback, but this looks already very promising!

Categories: Azure
Tags: Logic Apps
written by: Toon Vanhoutte

Posted on Wednesday, July 12, 2017 9:45 AM

by Stijn Moreels

In this part of the Test Infected series, I will talk about what makes code hard to test – both in a Test-First mindset and without.

Hard-to-Test Code

By “hard”, I mean anything that is uneasy, sloppy, frustrating or annoying, … anything that makes you sigh. All that can be categorized as “hard”.

TDD, or Test-Driven Development, is a lot more than just writing tests first; it's also about Designing Software. You think about a lot of things while writing tests, and all those things together guard you from writing Hard-to-Test Code.

When I write tests, I want an easy way to exercise and verify the SUT (System Under Test). The less code I need to write to do that, the better. The clearer the test intent, the better. The easier the test, the better.

Obscured Fixture Setup

What do I mean with a Fixture in this paragraph? Anything that you need to do so you can exercise the SUT. This could mean that you first have to initialize the SUT with some valid arguments, it could mean that you must insert some Dummy Data into a datastore, it could mean that you must call some methods of the SUT...

According to Kent Beck: anything that “Sets the Table” for the SUT.

This section is about the maintainability of the Test Fixture and how we can improve this.

Discovery

With a Complex Fixture Setup, I mean that I must write a lot of code to "set this table". I must admit that I quickly label a fixture as being "complex" – but that's a good thing, I guess.

Look at the following snippet. It's a good thing that we "spy" on the Client instead of actually sending a mail, but also note that your eyes are drawn to the strings in the Attachments and Header of the valid message instead of to the actual testing and verifying.
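
The original snippet was an image; a reconstruction in its spirit (the SpyMailClient, MailMessage and MessageService types are hypothetical) could look like this:

    public void Send_ValidMessage_SendsMail()
    {
        // Arrange
        var client = new SpyMailClient();
        var message = new MailMessage();
        message.Header = "X-Company-Header: Codit";
        message.Body = "Dear customer, …";
        message.Attachments.Add(new Attachment("invoice.pdf"));
        message.Attachments.Add(new Attachment("terms-and-conditions.pdf"));
        var service = new MessageService(client);

        // Act
        service.Send(message);

        // Assert
        Assert.True(client.MessageWasSent);
    }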

I don’t like complex, big, or hard-to-understand Fixtures. I want clear visual of what is tested and how. Nothing more, nothing less. Of course, I don’t know if you find this complex, maybe you don’t (because you wrote it), I just don’t like big methods I guess.

We have 16 lines, with 3 comments, 2 blank lines, 2 braces, and 7 lines of Fixture Setup.

Causes

You could think of many causes to having a Complex Fixture Setup.

  • You could have a Tightly-Coupled system which forces you to create all those extra objects and initialize them with the right values.
  • Your test includes information which doesn’t really matter in the context of the test; this way introducing a Polluted Test. This could happen in a Copy-Paste programming environment in which you just copy the Fixture of another test.
  • It could also happen if there wasn't enough research done into maybe Faking the Fixture and thereby avoiding unnecessary setup code.

Impact

Now, we have a Complex Fixture – so what?

Let’s look at the impact a Complex Fixture could have on your development. What if I want to test some unhappy paths for the Client. What if we want to test the creation of the Message with Constructor Tests. What if we want to test with a single Attachment instead of two…

All those tests would require a similar Fixture Setup.

If you have Cut-and-Paste developers in your team, you will get a lot of Test Duplication, which again results in a High Test Maintenance Cost.
Besides the duplication, it isn't clear to me that we are testing a "valid message". Does it have to do with the header value? With the attachments? With something else? …

What do I have to do to create a valid mail message? Does this message require attachments? Without a clear Fixture Setup, we don't have a clear Test Overview.

Possible Solution

The first thing you should do is eliminate all the unnecessary information from your test. If you don't use it or need it – don't show it.
I only want to see what's really important for the test to pass.

If, after that, you still have a big Fixture to set up, place it in Parameterized Creation Methods so each test only sends the Creation Method the values that matter for that test. This way you resolve the duplication across tests (see the sketch below).
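
A minimal sketch of such a Creation Method (the MailMessage API is hypothetical):

    // Hides the valid defaults; the test only passes what it cares about.
    private static MailMessage CreateValidMessageWith(params Attachment[] attachments)
    {
        var message = new MailMessage();
        message.Header = "X-Company-Header: Codit";
        message.Body = "Dear customer, …";
        foreach (Attachment attachment in attachments)
            message.Attachments.Add(attachment);
        return message;
    }

A test that only cares about the number of attachments then reads as: var message = CreateValidMessageWith(new Attachment("invoice.pdf"));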

Also make sure that you don't have any duplication in your Implicit Fixture (typically in some kind of "setup" method or constructor), for example when setting up a Datastore.

Missing Object Seam Enabling Point

What is an Object Seam? Michael C. Feathers introduced this and called it: “A place where you can alter behavior without editing that place”.

Every Seam must have an Enabling Point, where the decision for one behavior or the other can be made. In Object-Oriented languages, we typically create this Enabling Point by introducing Test Doubles, with which we implement another version of the dependency or other kind of object we need to exercise the SUT.

Discovery

Not having an Enabling Point for the SUT makes our code Hard-to-Test. This can happen in many situations – especially when some design decision has been made and everything else must make room. (This sounds a bit like a Golden Hammer Smell to me.)

Please don’t look at the names, it’s only an example.

The following example shows how the Message Service doesn't contain any Enabling Point. We are bound to the file system if we want to test the Message Service. A possible solution could be to introduce a Fake Datastore (probably in-memory) and pass it to the Message Service.
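
A sketch of the problem (all types are hypothetical):

    // The SUT instantiates its dependency itself: there is no Enabling Point,
    // so a test cannot substitute an in-memory fake for the file system.
    public class MessageService
    {
        private readonly FileBasedDataStore _dataStore = new FileBasedDataStore(@"C:\messages");

        public void Handle(Message message)
        {
            _dataStore.Write(message);
        }
    }

Accepting the datastore through the constructor (as an interface) would give the test its Enabling Point.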

Also note that we can’t even write a valid assertion. Yes, we could check if the File Based Data Store has written something on the disk. But I hope that you can see that this isn’t the right way to assert the Message Service.

We would have to write code that asserts the functionality of the File Based Data Store and not that of the Message Service.

Ok, that was a very "simple" example of how a SUT can miss an Enabling Point. Let's make it a tiny bit harder.

The File Sender always uses XML serialization/deserialization when we want to write a file. And we must always go through the Message Helper (what kind of name is that?) when we want to write a file.
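
A sketch of what such a class could look like (the static utility classes are hypothetical; using System.IO):

    // Hard-wired to two static classes: a "unit" test of FileSender
    // inevitably tests three classes at once.
    public class FileSender
    {
        public void Write(Message message, string path)
        {
            if (MessageHelper.IsEmpty(message))
                return;

            string xml = XmlSerializationHelper.Serialize(message);
            File.WriteAllText(path, xml);
        }
    }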

These Utility Classes are the result of thinking in a Procedural way, not an OOP way. Because all these static classes are used in this class, we have no choice but to test them too if we want to test the File Sender. If we Unit Test this class, we aren't testing a "unit", we are testing 3 (!) classes.

Whenever I see the name "helper" appear, I immediately think there is room for improvement in the design. For a start, please rename "helper" to a more appropriate name. Everything could be a helper for another class, but that doesn't mean we must put "helper" in every class name.

Try to move those functionalities into the appropriate classes. Our Message could, for example, have a property called IsEmpty instead of outsourcing this functionality to a different class.

Functional languages have the same need for injectable Seams. Look at the following sketch of the same functionality:
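
The original example was functional code (shown as an image); here is the same idea rendered in C# with injected delegates (names are illustrative):

    // The dependencies are injected as functions; in a test, a Stub is
    // simply another function passed in.
    public static void WriteToFile(
        Message message,
        Func<Message, bool> isEmpty,
        Func<Message, string> serialize,
        Action<string> writeFile)
    {
        if (!isEmpty(message))
            writeFile(serialize(message));
    }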

If we want to change the Message Helper or the serialization, we must inject functions in our “Write to File” function. In this case, our Stub is just another function.

Again, don’t look at the names, or functionality – it’s just to make a point on the Enabling Point of our Object Seams.

Causes

You could think of many causes of a Missing Enabling Point:

  • The Dependencies are instantiated right inside the SUT – which indicates Tight-Coupling (again) – and therefore we cannot substitute our own version of the dependency.
  • Utility Classes are a result of Procedural Thinking in an OOP environment (yes, maybe there are some exceptions – maybe), and they result in a Static Dependency inside the SUT. We cannot alter the behavior of these static classes in our test.
  • The SUT may do a lot of work inside the constructor, which always needs to run if we want to exercise the SUT – thereby limiting our ability to place an Enabling Point.
  • Having a chain of method calls can also indicate this Tight-Coupling, only in a more subtle way. If we have a single dependency but have to call three methods deep before we have the right info, we have the same problem. It violates the "Don't Talk to Strangers" design principle.

Impact

By constantly missing an Enabling Point for your Seam, you are creating a design that isn't reusable for other purposes.
Sometimes the reason behind the absence of Enabling Points lies in the way responsibilities are spread across classes, rather than wrapped in the right classes.

Maybe I’m a bit allergic to Utility Classes.

Possible Solution

Placing an Enabling Point in our SUT – that should be our solution. We need some kind of Observation Point we can use to alter behavior, so we can verify the outcome.

Note that the observation can be direct or indirect, just like the control point (or Enabling Point) of our SUT. Use different kinds of Test Doubles to achieve these goals.

We use a Test Stub to control the Indirect Inputs of the SUT; we use a Mock Object for verification of the Indirect Outputs.

Classes that have private information, behavior, … can perhaps expose that information or behavior in a subclass. We could create a Test-Specific Subclass with which we exercise the SUT instead of the real one.

But be careful that you don't override any behavior you are testing. That would lead to False Positive test cases and would introduce paths in your software that are never exercised in a test environment.

In Functional languages, everything is a function; so, we could introduce a Stub Function for our injection of data, and a Mock and/or Stub Function for our response and verification, … so we have an Enabling Point for our SUT.

Over-Specified Software by Mocking

I already said it in previous posts and sections: you must be careful about Mocking and what you try to Mock. In another post, I mentioned the Mock Object as Test Double to assert on indirect outputs of the SUT. This can be useful if you can’t verify any other outside observable behavior or state of the SUT.

By using this Mock Object we can in fact verify the change of the SUT.

Discovery

Yes, we can verify the change of the SUT; but is the change to the Mock Object equally maintainable?
If we need to change some signature of the Mock Object, we need to propagate that change throughout all the Mock Objects and their direct assertions to complete the change in signature.
If we mock out everything of the SUT and thereby Tight-Couple our Test Double to the SUT, we have Over-Specified Software by Mocking too much.

Causes

Multiple situations can cause a SUT to be Tight-Coupled to the DOC (Depended-On Component, in this case the Mock Object).

  • Writing Tests After implementation can cause this situation. If you have developed a Hard-to-Test SUT, you may have encountered a SUT that can only be exercised/tested (and so, verified) by injecting a Mock Object and asserting on the indirect outputs of the SUT.
    Sometimes the assert-phase of these tests isn't the whole picture we want to test but only a fragment; a side-effect. By asserting on this extra side-effect, we have made our test code Tight-Coupled to our SUT.
  • Testing Unnecessary Side-Effects can also cause Over-Specified Software. If we assert on things we don't necessarily need in our test, or on things that don't add any extra certainty to our test case, we should remove those assertions. Testing "extra" items doesn't result in more robust software but rather in Over-Specified Software.

Impact

So, let’s say you’re in such a situation; what’s the impact in your development work?
Just like any software that is Tight-Coupled, you have the cost of maintenance. If you’re software is tested in a way that the slightest change in your SUT that doesn’t alter any behavior of the SUT result in a failed test; you could state that you have Over-Specified Software.
Any change you make is a hard one, which result that developers will make lesser changes. Lesser changes/refactorings/cleanup… will result in lesser quality of your software.

Possible Solution

People tend to forget the Dummy Object when they are writing tests. The purpose of the Dummy Object is to fulfil the SUT's needs without actually doing anything for it. Passing "null" or empty strings are good examples, as are objects that are empty (and maybe throw exceptions when they are called, to ensure that they aren't touched during the exercise of the SUT).
Not everything needs to be a Mock Object. And just to be clear: a Mock isn't a Stub!
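
A minimal sketch of such a Dummy (the IMailClient interface is hypothetical):

    // Fulfils the signature without participating in the test; throwing when
    // touched proves the SUT never actually used it.
    public sealed class DummyMailClient : IMailClient
    {
        public void Send(MailMessage message) =>
            throw new InvalidOperationException("The dummy should never be called.");
    }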

You’ll be amazed how many unnecessary data you write in your tests when you start removing all those Mock Objects and replace them with lighter objects like Dummy Objects.

Yes, Mock Objects are absolutely necessary for a complete developer toolset for testing; yes, sometimes Mock Objects are the only possible solution to verify indirect outcomes of the SUT; yes, sometimes we need to assert on indirect output calls directly…
But certainly not always. Try using another Test Double first instead of reaching directly for a Mock Object – just like you'd use an Inline Fixture first before moving to a Shared Fixture.

Besides the fact that you can change your Test Double, you could also look at WHAT you want to test; you may come up with some refactorings in your SUT that let you verify the right state or behavior. The best way to get code that is easy to test is to write your tests first, which immediately results in testable code.

Asynchronous Code

A small section about Asynchronous Code, because there is too much to say about it to cover fully here.

The problem with async code is that we don't always have the same context in which we can assert on the right outcome. Sometimes we use Polling to get the work done, for example. This will (of course) result in Slow Tests, but sometimes we don't have control over the running process.

In the book xUnit Test Patterns we've seen that we can use a Humble Object, which extracts the async code so we can make synchronous calls in our test. In my previous post, I talked about a Spy that used a Wait Handle to block the thread before passing the test; this can also be a solution (if it's implemented right: timeout, …).

The xUnit framework written for .NET (not the xUnit family! .NET xUnit != xUnit Family) has support for async test methods, which makes sure that we can assert in the right task context.
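
A minimal sketch of such an async test (the MessageProcessor and Result types are hypothetical):

    [Fact]
    public async Task ProcessesMessageAsynchronously()
    {
        var processor = new MessageProcessor();

        // Awaiting keeps the assert in the right task context,
        // without polling or wait handles.
        Result result = await processor.ProcessAsync(new Message("A4"));

        Assert.True(result.Succeeded);
    }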

Conclusion

So many classes are already in play that are difficult to test; that's why my motto is to not make this pile of classes any bigger, and to write easy-to-read/easy-to-maintain code in a Test-Driven way, every day. Because every day can be the day you must change a feature, add/modify/remove functionality, or do anything else that involves change.

Tests are there to help, not to slow you down. In fact, by writing tests you work more productively, more efficiently, more safely, more robustly, …

So, don’t write any Hard-to-Test code but write code that grows incrementally from your tests.

Categories: Technology
Tags: Code Quality
written by: Stijn Moreels