
Codit Blog

Posted on Monday, October 3, 2016 3:48 PM

by Pieter Vandenheede

For BizTalk Server developers, given the choice between XSLT and the BizTalk mapper, opinions tend to differ. This is my view on the purpose and value of both, in an attempt to answer the ultimate question: should I use XSLT or the mapper to create BizTalk maps?

Here at Codit, we are always eager to do things as fast and as efficiently as possible. Lately, there have been several cases at clients/prospects where I had to make a case for the "Codit" way of working and, more specifically, how we make a BizTalk mapping... Some experienced BizTalk developers tend to look at me with an awful look of disgust when I tell them we do not use the BizTalk mapper for mappings... at all. Yes, at Codit, we always use custom XSLT in our BizTalk mappings, and this blog post will try to explain why! First, let me talk to you about the difference between the BizTalk mapper and custom XSLT...

What is XSLT?

  • XSLT, or Extensible Stylesheet Language Transformations, is a language for transforming XML documents into other XML documents, HTML, plain text or XSL-FO.
  • XSLT is, in contrast to most programming languages, declarative instead of imperative. It's based on pattern matching: rather than listing an imperative sequence of actions to perform, templates define how to handle a node matching a particular XPath pattern.
  • XSLT uses XPath.
  • XSLT was not made for BizTalk Server; rather, BizTalk Server implements XSLT 1.0. To this day it remains on that version, including BizTalk Server 2016 CTP2! XSLT 2.0 has been out for a long while now, but BizTalk remains at 1.0, because .NET does not offer support for XSLT 2.0. With XSLT 3.0 on the way, one might wonder if XSLT 2.0 support will ever come...
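To make the declarative, pattern-matching style concrete, here is a minimal XSLT 1.0 sketch (the element names are made up for illustration, not taken from a real schema):

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Pattern matching: this template fires for every Order element,
       wherever it appears in the source document. -->
  <xsl:template match="Order">
    <OrderSummary>
      <Reference><xsl:value-of select="@id" /></Reference>
      <!-- XPath does the heavy lifting: sum all line prices in one expression. -->
      <Total><xsl:value-of select="sum(Line/Price)" /></Total>
    </OrderSummary>
  </xsl:template>

</xsl:stylesheet>
```

Note how there is no explicit loop over orders: the runtime applies the template to every matching node.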

What is the BizTalk mapper?

  • The BizTalk mapper is a very nifty visualization tool, created to visualize a mapping between a source and a target schema.
  • The BizTalk mapper is quite easy to use, especially since it uses a drag-and-drop mechanic to map one field to another.
  • Using functoids, a developer can loop/modify/adapt/select certain data before writing the result to the output.
  • By cascading more than one functoid, one can easily chain these operations to allow a more complex mapping.
  • The BizTalk mapper generates XSLT 1.0!
  • The BizTalk mapper facilitates complex mappings by using pages in the grid view.

Let's compare!

Comparing one to the other is always hard, especially if you favor one in particular. Let's try to be as objective as possible anyway... Let me know in the comments if you disagree!

Performance - winner: XSLT
Custom XSLT is - unless you are working with really easy maps - almost always the better performer. The reason is that the mapper will, for example, create too many variables for every substep; any developer optimizing a mapping will see straight away that these can be trimmed. The mapper is a tool which generates XSLT. For easier mappings, the generated XSLT will be as good as anyone can write it. The moment it gets more complex, you will be able to tweak the generated XSLT code to perform better.
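As an illustration of that variable bloat (this is a stylized fragment, not literal mapper output), compare a mapper-style fragment with a hand-written equivalent:

```xml
<!-- Mapper-style output (illustrative): one variable per functoid step. -->
<xsl:variable name="var:v1" select="string(Customer/FirstName/text())" />
<xsl:variable name="var:v2" select="string(Customer/LastName/text())" />
<xsl:variable name="var:v3" select="concat($var:v1, ' ', $var:v2)" />
<FullName><xsl:value-of select="$var:v3" /></FullName>

<!-- Hand-written equivalent: no intermediate variables at all. -->
<FullName>
  <xsl:value-of select="concat(Customer/FirstName, ' ', Customer/LastName)" />
</FullName>
```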

(So far: XSLT 1 - 0 Mapper)

Ease of use - winner: Mapper
For some reason, XSLT is something special: people tend to be afraid of it when they do not know it. Not many people write XSLT with ease, so there is a certain threshold to get over. For people who already know XSLT, it flows naturally. The mapper is built to be intuitive and easy to use, for the untrained BizTalk professional and the seasoned BizTalk veteran alike. There are hundreds of scenarios you can tackle easily with it; only for some is there a need for a custom functoid or some custom XSLT.

(XSLT 1 - 1 Mapper)

Source Control - winner: XSLT
If you use a custom XSLT file, you need to add it to your solution and also to your source control. For every check-in you perform, you get a perfect version history: you can clearly see each and every byte of code that was changed, since it's just text, like any source code you write in .NET. The mapper is more complex for source control versioning. Your .btm file contains representations of the graphical links you made by dragging and dropping. It contains code for every page, functoid, etc. and its location on the grid. Updating a mapping can affect a whole lot more code than just your small change.

(XSLT 2 - 1 Mapper)

Maintainability - winner: draw
It might take some time to ‘dive’ into a mapping when working with XSLT, but making small changes can be as easy as searching for the node(s) you need to change and updating the code.
The same can be said of the mapper: it might take some time to ‘dive’ into a mapping, especially when working with multiple pages and complex links and functoids; in several cases it might even take longer. However, just like with XSLT, it depends on how you structure your map.

(XSLT 3 - 2 Mapper)

Interoperability - winner: XSLT
XSLT can be run anywhere and there is support for it everywhere: Visual Studio, Notepad++ (XML Tools plugin), Altova, Eclipse, oXygen, etc... It can be run in lots of editors and from .NET/Java/etc... XSLT is a standard: here to stay, proven and tested. Be sure, however, to stick to XSLT 1.0! Try to avoid inline C# code or extension objects, or your interoperability is also gone! This is unfair competition for the mapper, which is only available in the BizTalk Developer Tools for Visual Studio. Your existing mappings will, however, be transferable to Logic Apps, with existing functoids. But this is nowhere near as interoperable as XSLT.

(XSLT 4 - 2 Mapper)

Debugging - winner: draw
XSLT can be debugged from within Visual Studio: open your XSL file and click Debug. Easy. The mapper can be debugged, just like XSLT - you can even step into functoids. Just as easy.

(Final score: XSLT 5 - 3 Mapper)

This is how we (Codit) do it

At Codit, it is customary to do practically everything in custom XSLT. However, we are not ignorant of the mapper: it is a great tool, and not using it for what it does best would be such a waste. So this is our way of working:

  1. Create your mapping file (.btm) and select the source and target schemas.
  2. Link the fields you need in your specific mapping, using the BizTalk mapper, but do not use any functoids.
  3. Validate your mapping in Visual Studio, locate the generated XSLT and place it in an 'xsl' subfolder, using the same filename as your .btm file.
  4. Assign the XSL file to your BTM file and make sure to delete all of the links in your grid view. This makes clear to any future developer looking at the code that no mistakes can be made: it's all in the custom XSLT.
  5. Edit your custom XSLT and enjoy your freedom!

Some XSLT tips & tricks

Here are some additional tips and tricks I like to tell our developers who are starting off their integration career:

  • Use proper spacing in your XSLT! Empty lines between templates and empty XML comments before and after an <xsl:for-each> make your structures stand out so much more.
  • Use proper, clear and descriptive variable naming. It makes such a difference.
  • Write and use comments for "future-you"! Don't give "future-you" any reason to hate "past-you", because you will regret that extra 5 minutes you neglected to spend on comments while you still 'got it'.
  • Don't do math in XSLT! Don't tell anyone, but it's not very good at it. Use extension objects or specific math functions.
  • Avoid inline C# code in your XSLT code at all costs. We have seen that inline C# code in your mapping may result in memory leaks if you call your mapping from a custom pipeline component for example.
  • Stylize the first line of your stylesheet. Put all namespaces on a separate line for example, for easier readability.
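As a short sketch combining a few of these tips - namespaces on separate lines, comment markers around an <xsl:for-each> - with purely illustrative names:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:src="http://example.org/source"
                xmlns:tgt="http://example.org/target">

  <xsl:template match="/src:Orders">
    <tgt:Invoices>

      <!-- ==== loop over every order ==== -->
      <xsl:for-each select="src:Order">
        <tgt:Invoice>
          <tgt:Number><xsl:value-of select="@id" /></tgt:Number>
        </tgt:Invoice>
      </xsl:for-each>
      <!-- ==== end loop ==== -->

    </tgt:Invoices>
  </xsl:template>

</xsl:stylesheet>
```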


XSLT is the way to go! It does mean you need to invest in yourself: XSLT 1.0, XPath 1.0, etc... these are things you will need to learn. However, consider it a good investment! Knowledge of XSLT can be used in several fields, from front-end design to PDF generation; it is something you will need at some point and it is easy to learn! Also consider this: as a BizTalk / integration consultant, people using the mapper will not easily be able to handle an XSLT file, while people who know XSLT can do both, since any BizTalk map can be converted to XSLT in a few seconds. And one more thing: whenever things get really complex, the developers who favor the mapper might still need to copy/paste some of that custom XSLT into their scripting functoids to make their mapping work.

If you are interested in learning XSLT, please check the reference material provided at the end of this post. Also, be aware that Codit offers quite an extensive XSLT training, designed for integration products like BizTalk Server, Logic Apps, etc...

Please let me know if you have any remarks/comments. I'll be happy to elaborate further or to review some sections, given enough feedback.


For now: happy XSLT-ing!



Reference material

XSLT W3Schools Tutorial


Note: this post also appeared on

Categories: BizTalk
written by: Pieter Vandenheede

Posted on Thursday, August 25, 2016 2:39 PM

by Glenn Colpaert

We're currently working on a Service Fabric project where we have to implement a Service Fabric Actor that does work, based on a certain schedule or timeframe.
One of the challenges in this implementation was activating the actor immediately after deployment and registration.

This blog post will explain how this problem was handled.

If you are not familiar with Service Fabric Actors, please have a look at this in-depth description of the Actor Design Pattern and Service Fabric Reliable Actors:

Timers vs Reminders

Service Fabric Actors can schedule periodic work by registering either timers or reminders.

The main difference between timers and reminders is that reminders are triggered under all circumstances, until the actor unregisters the reminder or the actor is explicitly deleted.

Specifically, reminders are triggered across actor deactivations and failovers, because the Actors runtime persists information about the actor's reminders.

In-depth details of timers and reminders can be found here:

How did we do it?

The implementation starts with creating a Reliable Actor. When the Actor is activated, we register a reminder to do a certain job every 30 seconds. The actual job is implemented in the ReceiveReminderAsync method.
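The code sample from the original post did not survive here; a minimal sketch of such an actor, based on the Reliable Actors API (the type and reminder names are my own assumptions, not the original code), might look like this:

```csharp
internal class SchedulingActor : Actor, ISchedulingActor, IRemindable
{
    // Register the reminder as soon as the actor is activated.
    protected override async Task OnActivateAsync()
    {
        await RegisterReminderAsync(
            "DoWorkReminder",
            null,                       // no state to pass along
            TimeSpan.FromSeconds(30),   // first due time
            TimeSpan.FromSeconds(30));  // period: fire every 30 seconds
    }

    // Only purpose is to trigger the activation of the actor.
    public Task StartWork()
    {
        return Task.FromResult(true);
    }

    // The actual job: called by the runtime every time the reminder fires.
    public Task ReceiveReminderAsync(string reminderName, byte[] state,
        TimeSpan dueTime, TimeSpan period)
    {
        // ... do the scheduled work here ...
        return Task.CompletedTask;
    }
}
```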

In our case, the StartWork method is simply there to trigger the activation of the Actor. The activation of the actor is done inside an ActorService: a simple service derived from the base ActorService.

The only difference here is that inside RunAsync we create and activate an instance of our ISchedulingActor, and that way start the 30-second reminder.

The only thing we then have to change is the entry point of the service host process: in the Main method, instead of using the base ActorService to register our Actor, we use our own derived service, SchedulingActorService. That way we trigger RunAsync (and therefore the creation and activation of our actor) on the SchedulingActorService.
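A sketch of the derived service and the adjusted entry point (again, names are assumptions, not the author's original code):

```csharp
internal class SchedulingActorService : ActorService
{
    public SchedulingActorService(StatefulServiceContext context,
        ActorTypeInformation typeInfo)
        : base(context, typeInfo)
    { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        await base.RunAsync(cancellationToken);

        // Creating a proxy and calling StartWork activates the actor,
        // which in turn registers the 30-second reminder.
        var actor = ActorProxy.Create<ISchedulingActor>(new ActorId("Scheduler"));
        await actor.StartWork();
    }
}

internal static class Program
{
    private static void Main()
    {
        // Register our derived service so its RunAsync gets triggered.
        ActorRuntime.RegisterActorAsync<SchedulingActor>(
            (context, actorType) => new SchedulingActorService(context, actorType))
            .GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
}
```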


Timers and reminders are a great way to schedule periodic work for your Service Fabric Actors. For our specific implementation it was important that the Service Fabric Actor was activated immediately after deployment.

Using the above implementation of creating a derived service based on the Actor Service is a good approach to achieve that goal with a small amount of effort in a 'static' Actor Scenario.

Keep in mind that the reminder or timer is re-initialized every time the primary replica of your actor service is started. This happens after failover, resource balancing, upgrade,… So make sure that your methods and actions performed by your Reliable Actor are idempotent.



Categories: Azure
written by: Glenn Colpaert

Posted on Friday, July 22, 2016 4:02 PM

In this post I will show you how you can restart a Service Fabric service from code.

Currently I'm working on an Azure Service Fabric implementation. The official documentation contains a lot of valuable information, but it can take you quite some time to find the right piece of code you are looking for.

Our Service Fabric framework contains dynamically created Stateless services where during service startup our “configuration service” returns cached configuration.

When service configuration updates occur, it is necessary to restart all instances of the service. You can achieve this by using the Observer pattern.

I was looking into how you can restart a service from code, and it took me quite some time to figure out how to achieve that. The following code restarts a Stateless or Stateful service from within the service itself:
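The snippet itself is missing from this page; a sketch of one way to do this, using the FabricClient testability APIs (this is my reconstruction, not necessarily the original GitHub code), could be:

```csharp
// Restart the current replica from inside the service itself.
public static async Task RestartSelfAsync(ServiceContext context)
{
    using (var fabricClient = new FabricClient())
    {
        // Select this service's own partition and replica/instance.
        var partitionSelector = PartitionSelector.PartitionIdOf(
            context.ServiceName, context.PartitionId);
        var replicaSelector = ReplicaSelector.ReplicaIdOf(
            partitionSelector, context.ReplicaOrInstanceId);

        // For a stateful service; a stateless instance can be recycled
        // with RemoveReplicaAsync instead (Service Fabric recreates it).
        await fabricClient.FaultManager.RestartReplicaAsync(
            replicaSelector, CompletionMode.Verify);
    }
}
```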

You can find the original code on GitHub.

It is good to know that this only works on unsecured clusters. If you deploy this to a secured cluster, the service should run with elevated permissions.

Happy Service Fabric programming!


Categories: Azure

Posted on Friday, July 15, 2016 1:26 PM

by Tom Kerkhove

Today I will talk about how we are currently using code reviews to build better solutions and how it gives me more confidence in the quality that we are shipping.

Working in teams brings several challenges. One of them is a mixture of coding styles, causing code inconsistency across your project, which makes it hard to read or follow what the code does.

The bigger the team, the more important it is to transfer your knowledge about what you've worked on, so you have a limited Bus-factor in your team.

Or have you ever been working on a new feature, fully confident that it's ready to ship, only to notice that you've forgotten to take the caching into account? Or that you've forgotten to update the documentation (if any)?

Sound familiar?

Note - While some of these "pain points" can be tackled by using tools like Roslyn Analyzers or FxCop, I prefer a more humane approach and discuss the why instead of the how.

Code reviews to the rescue

By using code reviews we can avoid these problems by collaborating before we ship the code - Let's first take a look at an example:

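The example snippet did not survive on this page; a reconstruction consistent with the remarks below (the method and property names are guesses) might look like this:

```csharp
public void HandleOrder(Order o)
{
    // What does MyMethod do, and what does 'true' mean at the call site?
    MyMethod<Order>(o, true);

    // DateTime.Now is local time - what happens when this runs in the U.S.?
    if (o.DeliveryDate < DateTime.Now)
    {
        o.IsExpired = true;
    }

    // What does Process actually do? Hopefully bill for the order...
    Process(o);
}
```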
While this code could perfectly process your order, there are some issues:

  • Variable o holds a certain amount of state, but what does it represent? Looking at the signature it is clearly an Order, but how do I know that at the bottom of a 50-line method?
  • Time zones - they are evil! What happens if this code runs in the U.S.?
  • Calling MyMethod<T> takes a boolean, but what does it really do and how does the boolean come into play?
  • How does the caller know what the Process method does? Hopefully bill for the order? Also, it couldn't hurt to add additional documentation throughout the implementation in certain cases.

While performing a code review, the reviewer can raise these concerns with the reviewee and have a polite & constructive discussion, backed by a set of coding guidelines. By doing this, both parties get to know how the other person thinks, and they learn from each other. They also learn to express how they did something or what they have forgotten about.

Having a second pair of eyes on a certain topic can help a lot. Everybody has a different perspective, which can catch the things you have forgotten and can also lead to interesting discussions. This gives you a certain "don't worry, I've got your back" feeling and forces you to think deeper about what you've written.

At the end of the review, the reviewee may have some feedback to process, after which the code gets the seal of approval and is ready to ship.

Next to the code quality, you also perform small knowledge transfers with each other. You will not remember everything, but when needed you will remember certain pieces that can help guide you to a potential bug or its cause.

Last but not least is automated testing. It's a good thing to add unit/behavior/scenario tests to your reviews as well, because then the reviewer gets an indication of what you are testing and what you are NOT testing. Do the tests make sense, or should the reviewee cover additional scenarios?


Using code reviews is of course not a free lunch and it comes with its own difficulties.

The biggest challenge is that your team members need to be open to feedback and willing to incorporate it! If they are not up for it, you will just spend your valuable time only to notice that they are ignoring it. You, as a team, will need to decide whether or not you want to commit to the code-review system.

Every review takes a decent amount of time, so incorporate that into your planning: the reviewer needs to go through the code, discuss it with the reviewee, and then the reviewee needs to process the feedback. However, one might argue that it is better to take your time during development than to spend twice that amount fixing bugs or trying to understand what's going on.

New to code reviews? Here are some tips!

After using code reviews for a while, I've learned a couple of things about how not to do it and what can be challenging. Here are some tips to help you avoid some pitfalls.

Review early & frequently - The earlier you review, the better. This avoids discovering, once something is considered ready, that you've misunderstood some aspects of it or have re-invented the wheel.

Define coding guidelines - Agree upon a list of coding guidelines with your team to back your reviews. By doing this you have a clear list of styles, paradigms and DOs & DON'Ts that the team should follow to unify your coding styles & practices. This makes reviewing a lot easier and gives clear guidance on how the code should look.

An example of this could be that each parameter should be checked for null and that it should throw an ArgumentNullException when appropriate.
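As a sketch, such a guideline boils down to a guard clause at the top of every public method (the type and method names here are purely illustrative):

```csharp
public void RegisterCustomer(Customer customer)
{
    // Fail fast: a required parameter must not be null.
    if (customer == null)
    {
        throw new ArgumentNullException(nameof(customer));
    }

    // ... actual registration logic ...
}
```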

Add code reviews to your definition of done - By adding code reviews to your definition of done, you are certain that each new feature or bug fix has passed at least two pairs of eyes and that multiple people agree on it.

By doing that, you also remove the burden of one person being responsible for one aspect, since it's the whole team that is responsible for it.

Don't review to bash, review to teach & improve - Finding a balance between being strict or agreeing with everything is hard. If you just bash it will have a negative impact on the team collaboration and frustrations will arise. Be constructive & open.

Review in person, but look at the changes in advance - This allows you to form a personal opinion instead of simply following the reviewee. It avoids having to make decisions on the spot; you can digest the changes first and think of the obvious aspects.

Challenge and be challenged - Ask questions about the topic to see if the reviewee has covered all the possible scenarios and learn about how they envision it. Discussions are a good thing, not a bad thing.

Learn from each other - Don't be afraid to say what you like and don't like, or to admit you don't know about something. Learn from others: see why they did it their way and not the way you thought about it.


While the internet has a wide variety of blogs talking about this concept, I wanted to share my vision on it, since I'm a big fan of this practice and believe that it really improves quality. However, your success will depend on the cooperation of your colleagues and project management, and whether they want to commit to doing so.

One thing is certain - I've used this on my current project, and I will keep on doing so in the future.

Thanks for reading,


Categories: Architecture
Tags: ALM
written by: Tom Kerkhove

Posted on Thursday, July 7, 2016 11:58 AM

by Luis Delgado

Discover how to unit test your Node.js Azure Functions, to increase code quality and productivity, using these code samples.

Writing unit and integration tests for Azure Functions is critical to the development experience, since their execution relies on context variables that are beyond your control and supplied by the runtime. Furthermore, there is currently no local development or debugging experience available for Azure Functions. Therefore, testing whether your functions behave properly, in the context of their runtime, is extremely important to catch defects and increase your productivity.

Because Node.js is dynamically typed, I want to share a quick trick on how to mimic the Azure Functions runtime context in order to test your functions. I did not find any documentation from Microsoft related to unit testing Node.js Azure Functions, so feel free to comment on the approach I propose here.

As an example, we are going to make a function that posts an observation every minute to Azure IoT Hub:


Now we want to write a unit/integration test for this function.


The function getContextObject simply returns an object that mimics the context object expected by the Azure Functions runtime. The test will import your function from index.js, create the mock-up context object and feed it to your function for execution. Finally, within your test, you can override the context.done() function to do the assertions you need and call done().

Is this the proper way to test Azure Functions on Node.js? I will let the Functions Product Group comment on that :). However, this method works for me.

The other alternative is to create your internal functions in separate files that you can test in the traditional way you would test JS code, and import those files in your index.js file. The problem I see with that approach is that, if your internal functions call the context object, your tests will probably fail because of it.

Comments, feedback or suggestions? Submit an issue to the repository or write them below.

Categories: Azure
written by: Luis Delgado