
Codit Blog

Posted on Saturday, May 14, 2016 1:54 PM

by Brecht Vancauwenberghe, Joachim De Roissart, Michel Pauwels, Pieter Vandenheede and Robert Maes

The Codit recap of the third and last day at Integrate 2016.

Day 3 is already here and looking at the agenda, the main focus of today is IoT and Event Hubs.

Unfortunately we underestimated the traffic today and spent quite a long time getting to the ExCeL, missing the first session and a half. Luckily we were able to ask around and follow Twitter while stuck in traffic.

Azure IaaS for BizTalkers

Stephen W. Thomas walked us through the current possibilities of Azure IaaS (Infrastructure as a Service). The offering there is getting bigger every day, and having lots of choice means the right choice tends to be hard to make: 58 types of Virtual Machines makes one wonder which would be the right one. Luckily Stephen was able to guide us through the differences, identifying which ones are the better choice for optimal performance, depending on your MSDN access level. The choices you make have an immediate effect on the monthly cost, so be aware!

Unified tracking on premise and in the cloud

M.R. Ashwin Prabhu was the next presenter. He kicked off with a very nice demo, showing how to analyze on-premises BAM tracking data in a Power BI dashboard, something a lot of people might have thought about, but never got around to doing. We all know the BAM portal in BizTalk Server has not been updated since 2004 and urgently needs a refresh. Customers are spoiled nowadays and, with reason, many of them don't like the looks of the BAM portal and are discouraged from using it.

Moving from BAM data to Logic Apps tracked data is not far-fetched, so just minutes later he demonstrated combining BAM data and Logic Apps tracked data into one solution, with a very refreshing tracking portal in Power BI, using the following architecture:

(Image by Olli Jääskeläinen)

IoT Cases

IoT is not the next big thing; it is already the big thing right now!

Steef-Jan Wiggers explained how, a few years ago, he managed a solution to process sensor data from windmills. Nowadays, Azure components like Event Hubs (the unsung hero of the day), Azure Stream Analytics, Azure Storage, Azure Machine Learning and Power BI can make anything happen!

Kent Weare proved this in his presentation. He explained how, at his company, they implemented a hybrid architecture to move sensor data to Azure instead of to on-premises historians. Historians are capable of handling huge amounts of data and events, something which can also be done with Azure Event Hubs and Stream Analytics. Handling the output of this data was managed by Logic Apps, Azure Service Bus and partly by an on-premises BizTalk Server. The decision to move to Azure was mostly based on the options around scalability and disaster recovery.
In short, a very nice presentation, making Azure services like Event Hubs, Stream Analytics, Service Bus, Logic Apps and BizTalk Server click together!

After the break, Mikael Håkansson continued on the IoT train. He showed his experience and expertise with two more demos: first he managed to do some live programming and debugging on stage, demonstrating how easy it can be to read sensor data from IoT devices, in this case a Raspberry Pi 3. He also talked about device management, such as deploying applications and updates to firmware and software.
In a last demo he showed us how he made an access control device using a camera with facial recognition software.

The last sessions

Next up was Tomasso Groenendijk, talking about his beloved Azure API Management. He showed off its capabilities, especially around security (delegation and SSL), custom domain names and such.

A 3-day conference takes its toll: very interesting sessions, early mornings, long days and short nights. Just as our attention span was being pushed to its limits (nothing related to the speakers, of course), Nick Hauenstein managed to spice things up, almost as much as the food we tasted the last couple of days ;-)

He managed to give us the most animated session of the conference without a doubt. Demonstrated by this picture alone:

(Picture by BizTalk360)

A lightning-fast and interactive session, demonstrating his fictional company creating bobbleheads. He showed a setup of Azure Logic Apps and Service Bus acting and behaving like a BizTalk Server solution: correlation, long-running processes, most of the things we know from BizTalk Server.

A really nice demo and session, worth watching the video for!

Because we needed to close up our stand and had a strict deadline for our Eurostar departure, with heavy traffic in London, we unfortunately missed Tom Canter's session. We'll make sure to check out the video when the recordings are released in a few weeks.

The conclusion

Integrate 2016 was packed with great sessions, enchanting speakers and great people from all over the world. We had a really great time meeting people from different countries and companies. The mood was relaxed, even when talking to competitors! Props to the organizers of the event and the people working to make things go smoothly at the Platinum Suites of the London ExCeL. It was great seeing all of these people again, and hearing about everyone's progress since last year's BizTalk Summit 2015.

We take with us the realisation that Azure is hotter than ever. Microsoft is still picking up its pace and new features get added weekly. Integration has never been so complex, but nowadays Microsoft has one, or even more, answers to each question. With such a diverse toolset, both on-premises and in the cloud, it remains hard to keep ahead: a lot of time is spent keeping up to date with Microsoft's ever-changing landscape.

We hope you liked our recaps of Integrate 2016; we sure spent a lot of time on them. Let us know what you liked or missed, and we'll make sure to keep it in mind next year.

Thank you for reading!

Brecht, Robert, Joachim, Michel and Pieter




Posted on Friday, May 13, 2016 9:12 PM

by Brecht Vancauwenberghe, Joachim De Roissart, Michel Pauwels, Pieter Vandenheede and Robert Maes

The Codit recap of the second day at Integrate 2016.

Day 2 of the Integrate 2016 conference passed by. Please find our recap below.

Azure Functions & Azure App Service

Christopher Anderson kicked off the day with a talk on Azure Functions. He showed us some very impressive Azure App Service statistics, clearly showing that Azure services are rapidly gaining popularity and that usage increases every day.

He demonstrated Azure Functions and how they can be used in combination with Logic Apps. Azure App Service is a cloud platform that enables rapid development of functions, or µ-services as you could call them, which can connect to almost anything. Azure Functions can also be called from a Logic App, which allows fast and easy extensibility in various programming languages.

If you want to expose a simple µ-service, you should consider Azure Functions. When you are dealing with something more complex, use WebApps, as Azure Functions is designed to keep things simple.

Service Bus

The two sessions dealing with Service Bus were hosted by Dan Rosanova and Clemens Vasters, two authorities within the integration field. Both also have a more-than-thorough BizTalk Server background!

Dan Rosanova started off looking back at his BizTalk Server days, all while admitting he still loves BizTalk Server. He gave us some tips from his experience transitioning from an on-premises BizTalk Server platform to a cloud one: simplify solutions as much as possible; push logic into LOB systems, making them smarter, instead of implementing it in the integration layer and masking problems; and keep messaging simple and fast, pushing any logic or transformations into the endpoints.

Clemens Vasters talked us through the roadmap of Service Bus: the upcoming standardization of Azure Service Bus on the AMQP protocol and, not on the slides but only verbally communicated, the announcement that Azure Stack for on-premises will include Event Hubs and Service Bus, although the latter will no longer be free.
Clemens went on about AMQP and its capabilities, and went into deep-dive detail on how protocols like AMQP and MQTT continue to drive innovation.

Dan also told us earlier that an interesting new model was created for large-scale Event Hubs customers: the Event Hubs Dedicated Capacity model. This gives them dedicated cloud infrastructure instead of the current shared model, allowing them to keep pushing their dedicated resources to the limit without potentially taking performance hits due to a shared infrastructure.

Open Source Messaging Landscape

Richard Seroter, a well-known integration mastermind and superhero (again, with BizTalk roots), talked about the open source messaging landscape. At Integrate 2016 we are all Microsoft integration people, and we tend to take for granted that what Microsoft does is top-notch. However, integration experts need to dare to look further than their own technology stack: staying ahead requires you to see beyond your own boundaries. Richard is such an expert, and he showed us some examples of what is available in terms of messaging platforms next to the Microsoft stack.

Kafka, RabbitMQ and NATS were the chosen few. Richard demoed each one in several scenarios, discussing the pros and cons of each and comparing them to their Azure 'neighbor'. It was really impressive to see such lightweight messaging platforms perform very well on modest hardware.

Worth mentioning - and really impressive - was NATS in particular. A 3MB lightweight 'central nervous system', capable of handling millions of messages per second, with only minimal latency.

Last but not least, Richard is very impressed with the work Microsoft is doing, driving Azure services further every day. He also had, by far, the best slide deck!

HIS2016 announcements

Paul Larsen, Group Program Manager for Host Integration Server (HIS) at Microsoft, and Steve Melan, Integration MVP, had some news regarding the upcoming release of HIS 2016.

What was already announced for HIS 2016 was the platform alignment for both Windows Server and IBM platforms, an improved installation experience and improvements regarding message and data integration among others.

Today, they surprised everyone with the announcement that HIS 2016 will contain brand new clients for MQ, DB2 and Informix, written in .NET by Microsoft. According to their demo, you can expect a huge performance increase, most likely because the new clients do not use COM interop. They also offer better tracking and tracing.
Next to that, BizTalk Server 2016 CTP2 will contain new adapters for MQ, Informix and DB2, and also better integration into the Visual Studio IDE!

Azure Service Fabric

Next up was our colleague and CTO Sam Vanhoutte.
Speaking from our own experience in the field, he gave the crowd some insights on when to use which part of the Azure Service Fabric ecosystem.

Microsoft has been using Service Fabric inside its data centers for a long time, and decided to make it more user friendly so customers could use it for their own purposes. Applications have moved from a monolithic approach, in which an application is a single 'package' deployed to each machine, to a µ-services approach.
Splitting up the functionality of a larger application into simpler µ-services allows much better reuse and 'pluggability' of these µ-services, and also enables really fast scaling, faster deployments and better utilisation of Virtual Machines.

There are several ways of working with Azure Service Fabric, and Sam provided us with a few tips on when to use stateless or stateful services and actors. Towards the end of the session, Sam gave a few scenarios in which he used various aspects of Azure Service Fabric in the field, like the Codit IoT Field Gateway.

Integration Superheroes

Next up was Michael Stephenson, giving us a talk on how he sees the evolution from a specialized integration team, like most companies have today, to an approach where each team has at least one integration specialist (aka superhero). A necessary change, as companies nowadays are not quite happy with how the centralized approach works.

BizTalk Server

The last 3 sessions of the day were focused on BizTalk Server. I have a feeling a lot of people in the room were anticipating these sessions and had some pretty high expectations.

Johan Hedberg brought us a really refreshing session about a typical project lifecycle, but instead of using Team Foundation Server (TFS), he showed us what is possible with Git. Especially noteworthy was Git's more relaxed branching strategy in comparison to TFS, and the tooling around it. Using Stash (now called Bitbucket Server) for Git flow management, with Jenkins as a deployment automation tool, was a great way to show off the capabilities around Git and the Elastic Stack.

Sandro Pereira continued on his session from last year's BizTalk Summit with another set of tips and tricks regarding BizTalk migrations.
The last session was by Nino Crudele who, in his well-known, humorous style, gave us some insights on how delicate a BizTalk assessment can be.


Also a big congratulations to Axon Olympus, which is part of Codit International. They managed to bring home the 'BizTalk360 Partner of the Year' award for the second time in a row! Well done!

Thank you for reading.

Brecht, Robert, Joachim, Michel and Pieter

Posted on Thursday, May 12, 2016 12:14 AM

by Brecht Vancauwenberghe, Joachim De Roissart, Michel Pauwels, Pieter Vandenheede and Robert Maes

The Codit recap of the first day at Integrate 2016.


Joined by our new UK colleagues today, Codit International was represented by no fewer than 4 countries at Integrate 2016. With a total of 18 members, we were the most represented company at the event!

Bigger and better than the last London Summit, so let's summarize the key takeaways from today.

BizTalk + Logic Apps = A hybrid Vision

Microsoft admitted this morning that last year's vision was too cloud-focused. Today they adjusted that vision to a hybrid one, moving from fully on-premises solutions to a hybrid approach, considering that most companies have only a fraction of their applications running in the cloud.

Their vision of hybrid integration becomes a marriage between Microsoft BizTalk Server and Azure Logic Apps: customers can now seamlessly connect traditional on-premises applications to cloud-native applications.

(Picture by Eldert Grootenboer)

What's new in BizTalk?

Additional to the earlier announced features, Microsoft revealed some extra news on the upcoming release of BizTalk Server 2016 RTM:

- A new Logic Apps adapter, which enables integration between SaaS and on-premises LOB applications.
- Some UI tweaks, which they labeled a "Repaint", for a consistent look and feel. This, however, feels like it still needs some more polishing.

So, all in all, the tried and thoroughly tested existing BizTalk engine remains the same; no fancy new engine features were announced.

The planned roadmap for BizTalk Server 2016 RTM remains the same: 2016 Q4. A CTP2 is planned for 2016 Q3 (summer).

What's new in Logic Apps?

A lot more focus on the updates in Logic Apps today:

  • Logic Apps is coming to the Visual Studio IDE in the coming weeks!
  • It will be possible to reuse existing BizTalk Server .BTM mapping files in Logic Apps. There will be full support for BizTalk Server mapping functoids and custom code!
  • Continuous integration: allowing you to add Logic Apps to source control, have versioning, etc.
  • A new BizTalk connector, the counterpart of the Logic Apps adapter for BizTalk Server.
  • Added extensibility, allowing you to run custom code.
  • New patterns: request/response, solicit/response and "fire and forget".
  • Some new concepts: scopes, which allow nesting of components with individual compensation blocks (e.g. exception handling on each step). Coming in the next few weeks.
  • Tracking and the possibility to debug: to pinpoint specific Logic Apps instances, you can now track custom properties. Also new is chain tracking, which allows you to chain all tracking events in your instance.
  • Polling HTTP endpoints is now a possibility!
  • Running Logic Apps on a schedule.

Also introduced was a new concept: the Enterprise Integration Pack with the Integration Account. This type of account receives the following capabilities:

  • Containers for centralized storage of schemas, transforms, partners and certificates. A precursor to an upcoming trading partner management addition in the portal.
  • A VETER pipeline concept: a logic app can be designed to Validate, Extract, Transform, Enrich and Route, adding possibilities to route to topics and add exception handling on any step.
  • X12 and AS2 support will be added in the future. EDIFACT will be added further down the road, together with TPM and party resolution.

Integrations made simple with Microsoft Flow

At first sight, Microsoft Flow looks and feels like IFTTT or Zapier: a recipe-based tool to automate workflows across different apps and services. However, this service is based on Logic Apps and uses the Logic Apps designer.
For example: integrating Twitter with Dropbox in a couple of clicks.

It might look like just another clone, but Microsoft Flow is much more advanced:

  • More advanced flow design: more branches, more steps
  • More connectors
  • You will be able to add your own custom APIs.
  • Debug the flow and trigger it when you need to.
  • Flows can be shared with anyone.

Microsoft Flow brings a lot to the table, but consider the security concerns when handling integrations in a corporate environment. When using Flow, take into account the potential sensitive nature of the data you are working with.

Flow is free at the moment, but will possibly be licensed on a per-user basis. According to Microsoft: in a cheap license model.


A busy first day brings a lot of news and exciting new features. We are all looking forward to the next day of #Integrate2016.

Thank you for reading.

Brecht, Robert, Joachim, Michel and Pieter

Posted on Monday, May 2, 2016 5:34 PM

by Korneel Vanhie

Due to the XslCompiledTransform library in BizTalk 2013, user exceptions thrown via xsl:message terminate are no longer visible in the application event log.

Recently we received a comment from one of the students in our XSLT Training.

As shown in one of the examples he tried to throw a custom exception from an XSLT mapping and expected to find his error message in the application event log.

What he saw however was the following generic exception message:

The Messaging Engine failed while executing the inbound map for the message coming from source * with the Message * Details:"Exception has been thrown by the target of an invocation." 

Upon inquiring about the BizTalk version used (BizTalk 2013 R2), we had an inkling of what the issue might be and tried to reproduce it, using the following XSLT:
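The original stylesheet was included as an image; a minimal reconstruction along these lines (using xsl:message with terminate="yes", matching the error text seen later with the old mapping engine) is enough to reproduce the behavior:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Minimal reconstruction: a map that unconditionally terminates
     with a custom error message. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <xsl:message terminate="yes">Error: Mapping Terminated from XSL</xsl:message>
  </xsl:template>
</xsl:stylesheet>
```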

After executing this map from a Send or Receive port we found the same generic exception in the event log:

The Messaging Engine failed while executing the inbound map for the message coming from source * with the Message * Details:"Exception has been thrown by the target of an invocation." 

When debugging the application and catching the exception, we did find the custom user exception message in the inner exception.

To validate that this behavior was new in BizTalk 2013, we reverted to the old mapping engine as described in this blog post.

Sure enough, after failing another message, we got the following entry in the event log:

The Messaging Engine failed while executing the inbound map for the message coming from source URL: * with the Message Type *. Details:"Transform terminated: 'Error: Mapping Terminated from XSL"

Until recently we had to choose between throwing custom exceptions and using the compiled transform library, which is a shame: receiving readable exceptions from a mapping can greatly simplify troubleshooting a flow.

Luckily, starting from BizTalk 2013 R2 CU2, you can select the mapping engine on a per-transformation basis.









Categories: BizTalk
written by: Korneel Vanhie

Posted on Thursday, April 28, 2016 3:17 PM

by Toon Vanhoutte and Jonathan Maes

A real life example of how redis caching improved the performance of a large scale BizTalk messaging platform significantly.

With some colleagues at Codit, we're working on a huge messaging platform between organizations, built on top of Microsoft BizTalk Server. One of the key features we must deliver is reliable messaging; therefore we apply AS4 as a standardized messaging protocol. Read more about it here. We use the AS4 pull message exchange pattern to send messages to the receiving organization. Within this pattern, the receiving party sends a request to the AS4 web service and the messaging platform returns the first available message from the organization's inbox.

Initial Setup

Store Messages

In order to support this pattern, the messages must be stored in a durable way. After some analysis and prototyping, we decided to use SQL Server for this message storage. With the FILESTREAM feature enabled, we are able to store the potentially large message payloads on disk within one SQL transaction.

(1) The messages are stored in the SQL Server inbox table, using a BizTalk send port configured with the WCF-SQL adapter. The message metadata is saved in the table itself, the message payload gets stored on disk within the same transaction via FILESTREAM.

Retrieve Messages

As the BizTalk web service that is responsible for returning the messages will be used in high throughput scenarios, a design was created with only one pub/sub to the BizTalk MessageBox. This choice was made in order to reduce the web service latency and the load on the BizTalk database.

These are the two main steps:

(2) The request for a message is received and validated on the WCF receive port. The required properties are set to get the request published on the MessageBox and immediately returned to the send pipeline of the receive port. Read here how to achieve this.

(3) A database lookup with the extracted organization ID returns the message properties of the first available message. The message payload is streamed from disk into the send pipeline, which avoids publishing a potentially large message on the MessageBox. The message is returned to the receiving party this way. In case there is no message available in the inbox table, a warning is returned.

Potential Bottleneck

The pull pattern puts a lot of additional load on BizTalk, because many organizations (100+) will be polling for new messages at regular time intervals (e.g. every 2 seconds). Each pull request gets published on the BizTalk MessageBox, which causes extra overhead. As these pull requests will often result in a warning that indicates there is no message in the inbox, we needed to find a way to avoid overwhelming BizTalk with such requests.

Need for Caching

After some analysis, it became clear that caching was the way to go. Within the cache, we keep track of whether a certain organization has new messages in its inbox or not. In case there are no messages in the inbox, we bypass BizTalk and immediately return a warning. In case there are messages available in the organization's inbox, we just continue the normal processing described above. In order to select the right caching software, we listed the main requirements:

  • Distributed: there must be the ability to share the cache across multiple servers
  • Fast: the cache must provide fast response times to improve message throughput
  • Easy to use: preferably simple installation and configuration procedures
  • .NET compatible: we must be able to extend BizTalk to update and query the cache

It became clear that redis meets our requirements perfectly:

  • Distributed: it’s an out-of-process cache with support for master-slave replication
  • Fast: it’s an in-memory cache, which ensures fast response times
  • Easy to use: simple “next-next-next” installation and easy configuration
  • .NET compatible: there's a great .NET library that is used on Stack Overflow

Implement Caching

To ease the implementation and to be able to reuse connections to the cache, we created our own RedisCacheClient. This client has two connection strings: one to the master (write operations) and one to the slave (read operations). You can find the full implementation on the Codit GitHub. The redis cache is implemented in a key/value way: the key contains the OrganizationId, the value contains a Boolean that indicates whether there are messages in the inbox or not. Implementing the cache is done on three levels:

(A) In case a warning is returned indicating there is no message in the inbox, the cache gets updated to reflect that there is no message available for that particular OrganizationId. The key/value pair is also assigned a time-to-live.

(B) In case a message is placed on the queue for a specific organization, the cache gets updated to reflect the fact that there are messages available for that particular OrganizationId. This ensures that the key/value pair is updated as new messages arrive. This is faster than waiting for the time-to-live to expire.

(C) When a new request arrives, it is intercepted by a custom WCF IOperationInvoker. Within this WCF extensibility point, the cache is queried with the OrganizationId. In case there are messages in the inbox, the IOperationInvoker behaves as a pass-through component. In case the inbox of the organization is empty, the IOperationInvoker bypasses the BizTalk engine and immediately returns the warning. This prevents the request from being published on the MessageBox. Below is the main part of the IOperationInvoker; make sure you check the complete implementation on GitHub.
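As a sketch, the three cache touchpoints (A), (B) and (C) boil down to the following logic. This is a deliberately simplified, in-process stand-in for the distributed redis cache (all names are hypothetical), not the actual C# RedisCacheClient:

```javascript
// Simplified in-process stand-in for the redis cache (key: OrganizationId,
// value: a Boolean flag plus expiry), illustrating levels (A), (B) and (C).
var cache = new Map();

// (A) A warning came back: remember "inbox empty" with a time-to-live.
function markInboxEmpty(orgId, ttlMs) {
  cache.set(orgId, { hasMessages: false, expires: Date.now() + ttlMs });
}

// (B) A new message arrived: flip the flag immediately, instead of
// waiting for the time-to-live to expire.
function markMessageAvailable(orgId) {
  cache.set(orgId, { hasMessages: true, expires: Infinity });
}

// (C) On each incoming pull request: bypass the engine only when we know
// for sure the inbox is empty; otherwise fall through to normal processing.
function shouldBypassBizTalk(orgId) {
  var entry = cache.get(orgId);
  if (!entry || Date.now() >= entry.expires) return false; // unknown or stale
  return entry.hasMessages === false;
}
```

In the real solution the same flags live in redis, so every server in the group shares this state, and (A) and (B) keep it consistent without any coordination beyond the cache itself.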


After implementing this caching solution, we saw a significant performance increase in our overall solution. Without caching, response times for requests on empty inboxes averaged 1.3 seconds for 150 concurrent users. With caching, response times decreased to an average of 200 ms.

Lessons Learned

Thanks to the good results, we introduced redis caching for other functionality in our solution: we use it for caching configuration data, routing information and validation information. During the implementation, we picked up some lessons learned:

  • Redis is a key/value cache, change your mindset to use it to the maximum.
  • Re-use connections to the cache, as this is the most costly operation.
  • Avoid serialization of cached objects.

Thanks for reading!
Jonathan & Toon

Categories: BizTalk, Performance