
Codit Blog

Posted on Wednesday, June 28, 2017 4:17 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 3 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Rethinking Integration - Nino Crudele

Nino Crudele was perfectly introduced as the "Brad Pitt" of integration. We will not comment on his looks, but rather focus on his ability to always bring something fresh and new to the stage!

Nino's message was that BizTalk Server has the ideal architecture for extensibility across all of its components. Nino described how he put a "Universal Framework" into each component of BizTalk. He did this to be able to improve the latency and throughput of certain BizTalk solutions, when needed and appropriate.

He also shared his view on how not every application is meant to fully exist in BizTalk Server alone. In certain situations BizTalk Server may only act as a proxy to something else. It's always important to choose the right technology for the job. As an integration expert it is important to keep up with technology and to know its capabilities, allowing for a best of breed solution in which each component fits a specific purpose e.g. Event Hubs, Redis, Service Bus, etc...

Nino did a good job delivering a very entertaining session and every attendee will forever remember "The Chicken Way".

Moving to Cloud-Native Integration - Richard Seroter

Richard Seroter presented the 2nd session of the day. He shared his views on moving to cloud-native thinking when building integration solutions. He started by comparing the traditional integration approach with the cloud-computing model we all know today. Throughout the session, Richard shared some interesting insights on how we should all consider a change in mindset and shift our solutions towards a cloud-native way of thinking.

“Built for scale, built for continuous change, built to tolerate failure”

Cloud-native solutions should be built “More Composable”. Think loose coupling: building separate blocks that can be chained together in a dynamic fashion. This allows for targeted updates without having to schedule downtime… so “More Always-On”. In a short demo, Richard showed how to build a loosely-coupled Logic App that consumed an Azure Function, which would be considered a dependency in the traditional sense. He then deployed a change to the Azure Function - on-the-fly - to show us that this can be accomplished without scheduled downtime. Investing time in the design and architecture of your solution pays off when it results in zero-downtime deployments.

Next, he talked about adding “More Scalability” and “More Self-Service”. The cloud computing model excels in ease of use and makes it possible for citizen developers or ad-hoc integrators to take part in creating these solutions. This eliminates the need for a big team of integration specialists, but rather encourages a shift towards embedding these specialists in cross-functional teams.

In a fantastic demo, he showed us a nice Java app that provides a self-service experience on top of BizTalk Server. Leveraging the power of the new Management API (shipped with Feature Pack 1 for BizTalk 2016 Enterprise), he deployed a functioning messaging scenario in just a few clicks, without the need of ANY technical BizTalk knowledge. Richard then continued by stating that we should all embrace the modern resources and connectors provided by the cloud platform. Extend on premises integration with “More Endpoints” by using, for example, Logic-Apps to connect BizTalk to the cloud.

The last part focused on “More Automation”, where he not only talked about automated build and deployment, but also recommended creating environments via automation to achieve the highest possible level of consistency. In another short demo, Richard showed us how he automatically provisioned a Service Bus instance and all related Azure resources from the Cloud Foundry Service Broker CLI.

Be sure to check out the recording of this session! It has some valuable insights for everyone involved in cloud integration!

Overcoming Challenges When Taking Your Logic App into Production - Stephen W Thomas

The third session of the day was presented by Stephen W Thomas, who gave us some insights into the challenges he faced during his first Logic Apps implementation at a customer.

He split his session into three phases, starting with the decisions that had to be made. After a short overview of the EDI scenario he was facing, and going over the options that were considered for the implementation, it was clear that Logic Apps was the winner, for several reasons. The timeline was pretty strict, and custom .NET development would have taken 10 times longer than using Logic Apps. The initial investment for BizTalk, combined with the limited presence of BizTalk development skills, made Logic Apps the logical choice in this case. However, if you already use EDI in BizTalk, it probably makes sense to keep doing so, since your investment is already there.

In the second phase, he reflected on the lessons learned during the project. The architecture had to be designed with the rules of a serverless platform in mind. This included a release cadence of every two weeks, which could affect existing functionality and therefore makes it important to check the release notes. Another thing to keep in mind is the (sometimes) unpredictable pricing: whereas every action in Logic Apps costs money, in BizTalk you can just keep adding expression shapes without worrying about additional cost.

In the last phase, he left us with some tips and tricks that he gained through experience with Logic Apps. "Don't be afraid to use JSON". Almost every new feature is introduced in code view first, so take advantage of it by learning to work with it. It's also good to know that a For-Each loop in Logic Apps runs concurrently by default, but luckily this behaviour can be changed to Sequential (in the code view).
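
For reference, a minimal sketch of what that looks like in code view; the action names and URI here are invented for illustration:

    "For_each_order": {
      "type": "Foreach",
      "foreach": "@triggerBody()['orders']",
      "actions": {
        "Process_order": {
          "type": "Http",
          "inputs": { "method": "POST", "uri": "https://example.org/orders", "body": "@item()" },
          "runAfter": {}
        }
      },
      "runAfter": {},
      "operationOptions": "Sequential"
    }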

BizTalk Server Deep Dive into Feature Pack 1 - Tord Glad Nordahl

Tord had a few announcements to make which were appreciated by the audience:

  • The BizTalk connector for Logic Apps, which was in preview before today, is now generally available (GA).
  • Microsoft IT publicly released the BizTalk Server Migration Tool, which they use internally for their own BizTalk migrations. This tool should help in migrating your environment towards BizTalk Server 2016.

Tord discussed the BizTalk Server 2016 Feature Pack 1 next.

With the new ALM features, it's possible to deploy BizTalk solutions to multiple environments from any repository supported by Visual Studio Team Services. Just like the BizTalk Deployment Framework (BTDF), it is also possible to have one central binding file with variables being replaced automatically to fit your specific target environment.
The Management API included in Feature Pack 1 enables you to do almost anything that is possible in the BizTalk Management Console. You can create your own tools based on the API. For example: end users can be provided with their own view on the BizTalk environment. The API even supports both XML and JSON.
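
To give an idea, querying the API from PowerShell could look like the sketch below. Treat the base URL and route as assumptions on our part: they depend on how the Management Service is installed in IIS.

    # Hypothetical example: list the receive ports via the FP1 REST Management API.
    $base = "http://biztalk-server/BizTalkManagementService"
    Invoke-RestMethod -Uri "$base/ReceivePorts" -UseDefaultCredentials -Headers @{ Accept = "application/json" }
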
Feature Pack 1 also includes a new Power BI template, which builds on the newly added analytics capabilities. The template should give you a good indication of the health of your environment(s). It can be changed or extended with everything you can see in the BizTalk Management Console, according to your specific needs.

Tord also discussed that the BizTalk team is working on several new things already, but he could not announce anything new at the moment. We are all very anxious to hear what will come in the next Feature Pack!

BizTalk Server Fast & Loud - Sandro Pereira

Fast and loud: a session about BizTalk performance optimizations. The key takeaway is that you need to tune your BizTalk environments, beyond a default installation, if you want to achieve really high throughput and low latency. Sandro pointed out that performance tuning must be done on three levels: SQL Server, BizTalk Server and hardware.

SQL Server is the heart of your BizTalk installation and performance heavily depends on its health. The most critical aspect is ensuring that the SQL Agent jobs are up and running. These jobs keep your MessageBox healthy and prevent your DTA database from getting flooded. Treat the BizTalk databases as a black box: don't create your own maintenance plans, as they might jeopardize performance and you'll end up with unsupported databases. Besides that, he mentioned that you should avoid large databases and that it is always preferable to go with dedicated SQL resources for BizTalk.
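
As a quick sanity check, you could verify that those jobs exist and are enabled. This is only a sketch; the exact job names vary slightly between BizTalk versions:

    -- List the BizTalk-related SQL Agent jobs and whether they are enabled.
    SELECT name, enabled
    FROM msdb.dbo.sysjobs
    WHERE name LIKE '%BizTalk%'
    ORDER BY name;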

Performance tuning on the BizTalk Server level is mostly done by tuning and configuring host instances. You should have a balanced strategy for assigning BizTalk artifacts to the appropriate hosts. A dedicated tracking host is a must-have in every BizTalk environment. Be aware that there are also configuration settings at host (instance) level, of which the polling interval setting provides the quickest performance win to reduce latency.

It's advised to take a look at all the surrounding hardware and software dependencies. Your network should provide high throughput, the virtualization layer must be optimized and disks should be separated and fast.

These recommendations are documented in the Codit best practices and are also part of our BizTalk training offering.

BizTalk Health Check – What and How? - Saffieldin Ali

After all the technical and conceptual sessions, it is good to be reminded that existing BizTalk environments and solutions need to be monitored properly, to assure a healthy BizTalk platform and to proactively maximize both reliability and performance. Identifying threats and issues early lowers or even avoids downtime in case of a disaster.

Microsoft's Saffieldin Ali shared his own experience, including various quotes that he collected throughout the years.

When visiting and interviewing customers, Ali has a list of red flags which, without even examining the environments, indicate that BizTalk may not be as healthy as you would want it to be. Customers having their own backup procedures, a lack of documentation of the BizTalk environment, or missing the latest updates can all be signs of bad configuration. Any of these can cause issues in the future, affect operations and disrupt business.

To detect these threats, Ali explained how you can use tools like BizTalk Health Monitor (BHM), Performance Analysis of Logs (PAL) and Microsoft Baseline Security Analyzer (MBSA). He also showed us that BHM has two modes: a monitoring mode, which can be used as a basic monitoring tool, and a reporting mode, which reports on the health of a BizTalk environment.

Incorporating the use of these tools in your maintenance plan is definitely a best practice every BizTalk user should know about!

The Hitchhiker's Guide to Hybrid Connectivity - Dan Toomey

In the first session after the afternoon break, Dan Toomey presented the different types of hybrid connectivity that allow us to easily set up secure connections between systems.

The network-based options are Azure Virtual Network (VNET) integration for web and mobile apps, and VNET with API Management, which has all the advantages of APIM with an added layer of security. The non-network-based options are WCF Relay, Azure Relay Hybrid Connections and the On-Premises Data Gateway.

The concept of WCF Relay is based on a secured listener endpoint in the cloud, which is opened via an outbound connection from within a corporate network. Clients send messages via the listener's endpoint, without the receiving party having to make any changes to the corporate firewall.

WCF Relay, which has the advantage of being the cheapest option, works on the application layer, whereas Hybrid Connections (HC) work on the transport layer. HC rely on port forwarding and work cross-platform. They are set up in Azure (Service Bus) and connect to the HC Manager, which is installed on premises.

The On-Premises Data Gateway acts as a bridge between Azure PaaS and on premises resources, and works with connectors for Logic Apps, Power Apps, Flow & Power BI.

In the end, Dan went through some scenarios to illustrate which relay is the better fit for specific situations. Being a big fan of Hybrid Connections, he often picked the Hybrid Connection as the preferred solution.

Dan finally mentioned that he has a Pluralsight training that goes into this topic. Although a bit dated since it also discusses BizTalk Services, the other material is still relevant.

Unlocking Azure Hybrid Integration with BizTalk Server - Wagner Silveira

Why should we use BizTalk Server and Azure together? That is the question Wagner Silveira kicked off his talk with.

He then explained that, in a complex scenario, you may want to use BizTalk Server when there are multiple on-premises systems to call, and base the solution on Azure components when there are multiple cloud endpoints to interface with. The goal is to avoid creating a slingshot solution with multiple round trips between on premises and the cloud.
Since most organizations still have on-premises systems, they can use BizTalk Server to continue getting value out of their investments, and to continue leveraging the experience their developers and support teams have acquired.

He went on to talk about the options that are available to connect to Azure. Wagner gave an overview of these options, discussing Service Bus, Azure WCF Relay, App Services, API Management and Logic Apps.
When discussing Service Bus for example, he talked about how Service Bus allows full content based routing and asynchronous messaging. The latter would allow you to overcome unreliable connectivity, allow for throttling into BizTalk Server and multicasting scenarios from BizTalk to multiple subscribers.

Next he spoke about WCF Relay. He talked about some of the characteristics of this option: it supports both inbound and outbound communication based on dynamic relay, is optimized for XML, and supports ACS and SAS security. WCF Relay also has REST support, so it can expose REST services as well. Outbound communication is generally allowed by default; inbound communication will require network changes. Finally, you can also define outbound headers to support custom authentication.

A couple of typical scenarios for inbound WCF-relay that Wagner gave as examples were: real-time communication, exposing legacy or bespoke systems and to minimize the surface area (no "swiss cheese" firewall).
Examples of outbound scenarios are: leveraging public API’s and shifting compute to the cloud (for batch jobs for example), which allows us to minimize the BizTalk infrastructure footprint.

Next up was the Logic Apps adapter for BizTalk Server. Scenarios for using this solution include extending workflows into Azure (think of connecting BizTalk Server to Salesforce, for example). Another example would be exposing on-premises data to Logic Apps.
For flows from Logic Apps into BizTalk on the other hand, it allows for securing internal systems, pre-validating messages and leveraging on premises connectors to expose legacy/bespoke systems.

The main takeaway for this session is that you should get to know the tools available, understand the sweet spots and know what to avoid. Not only from a technology and functional point of view, but from a pricing perspective as well.

There are many ways to integrate… Mix, match, and experiment to find the balance!

From Zero to App in 45 minutes (using PowerApps + Flow) - Martin Abbott

It is hard to give an overview of the last session by Martin Abbott about PowerApps, since Martin challenged the "demo gods" by making it a 40-minute demo with only 3 slides. A challenging, but interesting session, in which Martin created a PowerApps app using some entities in the Common Data Service. He then connected PowerApps to Microsoft Flow and created a custom connector to be consumed as well, demonstrating the power of the tools. As one of the "founding fathers" of the Global Integration Bootcamp, he also announced the date for the next #GIB2018 event: it will take place on March 24th 2018.


Thank you for reading our blog post, feel free to comment with your feedback. Keep coming back, since there will be more blog posts to summarize the event and to give you some recommendations on what to watch when the videos are out.


This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community

Posted on Tuesday, June 27, 2017 8:25 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 2 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.

Microsoft IT: journey with Azure Logic Apps - Padma/Divya/Mayank Sharma

In this first session, Mayank Sharma and Divya Swarnkar talked us through Microsoft’s experience implementing their own integrations internally. We got a glimpse of their approach and the architecture of their solution.

Microsoft uses BizTalk Server and several Azure services like API Management, Azure Functions and Logic Apps, to support business processes internally.
They run several of their business processes on Microsoft technologies (the "eat your own dog food"-principle). Most of those business processes now run in Logic App workflows and Divya took the audience through some examples of the workflows and how they are composed.

Microsoft has built a generic architecture using Logic Apps and workflows. It is a great example of a decoupled workflow, which makes it very dynamic and extensible. It intensively uses the Integration Account artifact metadata feature.

They also explained how they achieve testing in production. They can, for example, route a percentage of traffic via a new route, and once they are comfortable with it, they switch over the remaining traffic. She however mentioned that they will be re-evaluating how they will continue to do this in the future, now that the Logic Apps drafts feature was announced.

For monitoring, Microsoft Operations Management Suite (OMS) is used to provide a central, unified and consistent way to monitor the solution.

Divya gave some insights on their DR (disaster recovery) approach to achieve business continuity. They are using Logic Apps to keep their Integration Accounts in sync between active and passive regions. BizTalk server is still in use, but acts mostly as the proxy to multiple internal Line-of-Business applications. 

All in all, a session with some great first-hand experience, based on Microsoft using their own technology.
Microsoft IT will publish a white paper in July on this topic. A few Channel9 videos are also coming up, where they will share details about their implementation and experiences.

Azure Logic Apps - Advanced integration patterns - Jeff Hollan/Derek Li

Jeff Hollan and Derek Li are back again with yet another Logic Apps session. This time they are talking about the architecture behind Logic Apps. As usual, Jeff is keeping everyone awake with his viral enthusiasm!

A very nice session, which explained that the Logic Apps architecture consists of three parts:

The Logic Apps Designer is a TypeScript/React app. This self-contained app can run anywhere, e.g. Visual Studio, the Azure portal, etc. The Logic Apps Designer uses OpenAPI (Swagger) to render inputs and outputs and to generate the workflow definition. The workflow definition can be seen as the JSON source code of the Logic App.

Secondly, there is the Logic App Runtime, which reads the workflow definition and breaks it down into a composition of tasks, each with its own dependencies. These tasks are distributed by the workflow orchestrator to workers which are spread out over any number of (virtual) machines. Depending on the worker - and its dependencies - tasks run in parallel to each other, e.g. a ForEach action which loops 100 times might be executed on 100 different machines.

This setup makes sure any of the tasks get executed AT LEAST ONCE. Using retry policies and controllers, the Logic App Runtime does not depend on any single (virtual) machine. This architecture allows a resilient runtime, but also means there are some limitations.

And last, but not least, we have the Logic Apps Connectors, connecting all the magic together.
These are hosted and run separately from the Logic App or its worker. They are supported by the teams responsible for the connector. e.g. the Service Bus team is responsible for the Service Bus connectors. Each of them has their own peculiarities and limits, all described in the Microsoft documentation.

Derek Li then presented an interesting demo showing how exceptions can be handled in a workflow using scopes and the "RunAfter" property, which can be used to execute different actions if an exception occurs. He also explained how retry policies can be configured to determine how many times an action should retry. Finally, Jeff gave an overview of the workflow expressions and wrapped up the session explaining how expressions are evaluated inside-out.
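
To illustrate, here is a hand-written sketch of such an error handler in the workflow definition; the action names and URI are invented:

    "Handle_failure": {
      "type": "Http",
      "inputs": {
        "method": "POST",
        "uri": "https://example.org/alerts",
        "body": "@result('Scope_main')",
        "retryPolicy": { "type": "fixed", "count": 3, "interval": "PT20S" }
      },
      "runAfter": {
        "Scope_main": [ "Failed", "TimedOut" ]
      }
    }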

Enterprise Integration with Logic Apps - Jon Fancey

Jon Fancey, Principal Program Manager at Microsoft, took us on a swift ride through some advanced challenges when doing Enterprise Integration with Logic Apps.

He started the session with an overview and a demo where he showed how easy it is to create a receiver and sender Logic App to leverage the new batch functionality. He announced that, soon, the batching features will be expanded with Batch Flush, Time-based batch-release trigger options and EDI batching.

Next, he talked about Integration Accounts and all of its components and features. He elaborated on the advanced tracking and mapping capabilities.
Jon showed us a map that used XSLT parameters and inline C# code processing. He passed a transcoding table into the map as a parameter and used C# to do a lookup/replace of certain values, without having to call back to a database for each record/node. Jon announced that the mapping engine will be enriched with BOM handling and the ability to specify alternate output formats like HTML or text, instead of XML only.

The most amazing part of the session was when he discussed the tracking and monitoring capabilities. It’s as simple as enabling Azure Diagnostics on your Integration Account to have all your tracking data pumped into OMS. It’s also possible to enable property tracking on your Logic Apps. The Operations Management Suite (OMS) centralizes all your tracking and monitoring data.

Jon also showed us an early preview of some amazing new features that are being worked on. OMS will provide a nice cross-Logic App monitoring experience. Some of the key features being:

  • Overview page with Logic App run summary
  • Drilldown into nested Logic-App runs
  • Multi-select for bulk download/resubmit of your Logic App flows.
  • New query engine that will use the powerful Application Insights query language!

We’re extremely happy and excited about the efforts made by the product team. The new features shown and discussed here prove that Microsoft truly listens to the demands of its customers and partners.

Bringing Logic Apps into DevOps with Visual Studio - Jeff Hollan/Kevin Lam

The last Microsoft session of Integrate 2017 was the second time Kevin Lam and Jeff Hollan got to shine together. The goal of their session was to enlighten us about how to use some of the tooling in Visual Studio for Logic Apps.

Kevin took to the stage first, starting with a small breakdown of the Visual Studio tools that are available:

  • The Logic Apps Designer is completely integrated in a Visual Studio "Resource Group Project".
  • You can use Cloud Explorer to view deployed Logic Apps
  • Tools to manage your XML and B2B artifacts are also available

The Visual Studio tools generate a Resource Group deployment template, which contains all resources required for deployment. These templates are used, behind the scenes, by the Azure Resource Manager (ARM). Apart from your Logic Apps, this also includes auto-generated parameters, API connections (to, for example, Dropbox, Facebook, ...) and Integration Accounts. This file can be checked into source control, giving you the advantage of CI and CD if desired. The goal is to create the same experience in Visual Studio as in the portal.

Jeff then started off by showing the Azure Resource Explorer. This is an ARM catalog of all the resources available in your Azure subscription.

Starting with ARM deployment templates might be a bit daunting at first, but by browsing through the Azure Quickstart Templates you can get the hang of it quickly. It's easy to create a single template and deploy that parameterized template to different environments. By using a few tricks, like Service Principals to automatically get OAuth tokens and the resourceId() function to get the resourceId of a freshly created resource, you are able to automate your deployment completely.
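
As a small illustration (our own sketch, not from the session), a stripped-down template that exposes the resourceId() of a Logic App as an output:

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "logicAppName": { "type": "string" }
      },
      "resources": [],
      "outputs": {
        "logicAppId": {
          "type": "string",
          "value": "[resourceId('Microsoft.Logic/workflows', parameters('logicAppName'))]"
        }
      }
    }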

What's there & what's coming in BizTalk360 & ServiceBus360 - Saravana Kumar

To the tune of "Rocky", Saravana Kumar entered the stage to talk about the latest updates regarding BizTalk360 and ServiceBus360.

He started by explaining the standard features of BizTalk360 around operations, monitoring and analytics.
Since May 2011, 48 releases of BizTalk360 have been published, each adding 4 or 5 new features.

The latest release includes:

  • BizTalk Server License Calculator
  • Folder Location Monitoring for FILE, FTP/FTPS, SFTP
  • Queue Monitoring for IBM MQ
  • Email Templates
  • Throttling Monitoring

Important to note: BizTalk360 supports more and more cloud integration products like Service Bus and Logic Apps. What they want to achieve is having a single user interface to configure monitoring and alerting.

Similar to BizTalk360, with ServiceBus360, Kovai wants to simplify the operations, monitoring and analytics for Azure Service Bus.

Give your Bots connectivity, with Azure Logic Apps - Kent Weare

Kent Weare kicked off by explaining that the evolution towards cloud computing not only results in lower costs and elastic scaling, but also provides a lot of opportunities to let your business scale. Take advantage of the rich Azure ecosystem by automating insights, applying Machine Learning or introducing bots. He used the example of an energy generation shop, where bots help to increase competitiveness and the productivity of the field technicians.

Our workforce is changing! Bring insights to users, not the other way around.

The Bot Framework is part of the Cognitive Services offering and can leverage its various vision, speech, language, knowledge and search features. Besides that, the Language Understanding Intelligence Service (LUIS) ensures your bot can smoothly interact with humans. LUIS is used to determine the intent of a user and to discover the entity on which the intent acts. This is done by creating a model that is used by the chat bot. After several iterations of training the model, you can really give your applications a human "face".
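
To give an idea, a LUIS reply has roughly the following shape; the intent and entity names are invented for this example:

    {
      "query": "set the thermostat to 21 degrees",
      "topScoringIntent": { "intent": "SetTemperature", "score": 0.95 },
      "entities": [
        { "entity": "21 degrees", "type": "Temperature", "score": 0.87 }
      ]
    }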

Kent showed us two impressive demos with examples of leveraging the Bot Framework, in which both Microsoft Teams and Skype were used to interact with the end users. All backend requests went through Azure API Management, which invoked Logic Apps reaching out to multiple backend systems: SAP, ServiceNow, MOC, SQL and QuadrigaCX. Definitely check out this session, when the videos are published!

Empowering the business using Logic Apps - Steef-Jan Wiggers

Previous sessions about Logic Apps mainly focused on the technical part and possibilities of Logic Apps.
Steef-Jan Wiggers took a step back and looked at the potential of Logic Apps from a customer perspective.

Logic Apps is becoming a worthy player in the iPaaS hemisphere. Microsoft started with an entirely new product in 2015, which has matured to its current state. Although still being improved upon on a weekly basis, it seems it is not yet considered a rock-solid integration platform.
Customers, but even Gartner in their Magic Quadrant, often make the mistake of comparing Logic Apps with the functionality that we are used to, with products like BizTalk Server. They are however totally different products. Logic Apps is still evolving and should be considered within a broader perspective, as it is intended to be used together with other Azure services.
As Logic Apps continues to mature, it is quickly becoming "enterprise integration"-ready.

Steef-Jan ended his session by telling us that Logic Apps is a flexible and easy way to deliver value at the speed of the business and will definitely become a centralized product in the IPaaS market.

Logic App continuous integration and deployment with Visual Studio Team Services - Johan Hedberg

In the last session before the afternoon break, Johan Hedberg outlined the scenario for a controlled build and release process for Logic Apps. He described a real-life use case, with 3 typical personas you encounter in many organizations. He stressed the importance of having a streamlined approach and a shared team culture/vision. With the available ARM templates and Visual Studio Team Services (VSTS), you have all the necessary tools to set up continuous integration (CI) and continuous deployment (CD).

The session was very hands-on and to the point. A build pipeline was shown, that prepared the necessary artifacts for deployment. Afterwards, the release process kicked off, deploying a Logic App, an Azure Function and adding maps and schemas to a shared Integration Account. Environment specific parameter files ensured deployments that are tailored for each specific environment. VSTS can cover the complete ALM story for your Logic Apps, including multiple release triggers, environment variables and approval steps. This was a very useful talk and demo, because ALM and governance of your Azure application is key if you want to deliver professional solutions.

Integration of Things. Why integration is key in IoT solutions? - Sam Vanhoutte

The penultimate session of the day was held by our very own CTO, Sam Vanhoutte. Sam focused his presentation on sharing some of the things Codit learned and experienced while working on IoT projects.

He started by stressing the importance of connectivity within IoT projects: "Connectivity is key" and "integration matters". Sam summarized the different connectivity types: direct connectivity, cloud gateways and field gateways and talked about each of their use cases and pitfalls.

Another important point of Sam's talk concerned the differences between IoT Proof of Concepts (PoC) and actual project implementations. During a PoC, it's all about showing functionality, but in a real project the focus is on robustness, security and connectivity.
Sam also touched on the different responsibilities and activities regarding gateways. He talked about the Nebulus IoT gateway and his ideas and experiences with it.

But IoT is not only about the cloud; Sam shared some insights on Azure IoT Edge as a Microsoft solution. Azure IoT Edge will be able to run within the device's own perimeter, but it is not available yet, not even in private preview. It can run on a variety of operating systems, like Windows or Linux, even on devices as small as (or smaller than) a Raspberry Pi. The session was concluded with the quote "Integration people make great IoT Solutions".

Be sure to check out our two IoT white-papers:

Also be sure to check out our IoT webinar, accessible via the Codit YouTube channel.

IoT - Common patterns and practices - Mikael Hakansson

Mikael Hakansson started the presentation by introducing IoT Hub, Azure IoT Suite and what this represents in the integration world. The Azure IoT Hub enables bi-directional connectivity between devices and cloud, for millions of devices, allowing communication in a variety of patterns and with reliable command & control.

A typical IoT solution consists of a cold path, which is based on persistent data, and a hot path, where the data is analyzed on the fly. About a year ago, the device twin concept was introduced in IoT Hub. A twin consists of tags, a desired state and a reported state, so it really maintains device state information (metadata, configurations and conditions).
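
A device twin looks roughly like this (all values invented for illustration):

    {
      "deviceId": "thermostat-01",
      "tags": { "building": "HQ", "floor": "2" },
      "properties": {
        "desired": { "targetTemperature": 21 },
        "reported": { "temperature": 19.5, "firmware": "1.0.3" }
      }
    }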

Mikael Hakansson prepared some demos, where a thermometer and a thermostat were simulated. The demos began with a simulated thermometer with a changing temperature, while that information was being sent to Power BI, via IoT Hub and Stream Analytics. After that, an Azure Function was able to send back notifications to that device. To simulate the thermostat, a twin device with a desired state was used to control the temperature in the room. 


Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Toon Vanhoutte (BE)
Jonathan Gurevich (NL) 
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Ricardo Marques (PT)
Paulo Mendonça (PT)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)

Categories: Community

Posted on Monday, June 26, 2017 7:18 PM

Integrate 2017 is the yearly conference for Microsoft integration. This is the day 1 recap of the sessions presented at Integrate with the views and opinions of the Codit staff.


Codit is back in London for Integrate 2017! This time with a record number of around 26 blue-shirted colleagues representing us. Obviously this makes sense now that Codit is bigger than ever, with offices in Belgium, France, The Netherlands, the UK, Switzerland, Portugal and Malta. This blog post was put together by each and every one of our colleagues attending Integrate 2017.

Keynote: Microsoft Brings Intelligence to its Hybrid Integration Platform - Jim Harrer

What progress has Microsoft made in the integration space (and their Hybrid Integration Platform) over the last year? How is Artificial Intelligence changing the way we think about enterprise application integration? Jim Harrer, Pro Integration Program Manager for Microsoft, kicked off with the keynote here at Integrate 2017.

With a "year in review" slide, Jim reminded us how a lot of new Azure services are now in GA. Microsoft also confirmed, once again, that hybrid integration is the path forward for Microsoft. Integration nowadays is a "Better Together"-story. Hybrid integration bringing together BizTalk Server, Logic Apps, API Management, Service Bus, Azure Functions and … Artificial Intelligence.

Microsoft is moving at an incredible pace and isn't showing any signs of slowing down. Jim also spoke briefly about some of the great benefits which are now being seen, since Logic Apps, BizTalk, HIS and APIM all fall under the same Pro Integration team.

Integration today is about making the impossible possible. Microsoft is working very hard to bring developers the necessary tooling and development experience, to make it easier and faster to deliver complex integration solutions. It's about keeping up - AT THE SPEED OF BUSINESS - to increase value and to unlock "the impossible".

Jim made a very good point:

Your business has stopped asking if you can do this or that, because it's always been a story about delivering something which takes months or will cost millions of dollars. Nowadays, you have the tools to deliver solutions at a fraction of the cost and a fraction of the time. Integration specialists should now go and ask business what they can do for them to maximize added value to that business and make your business as efficient as possible.

Jim had fewer slides in favor of some short, teasing demos:

  • Jeff Hollan demonstrated how to use Logic Apps with the Cognitive Services Face API to build a kiosk application to on-board new members at a fictitious gym ("Contoso Fitness"), adding the ability to enter the gym without needing to bring a card or fob but simply by using face recognition when entering the building.
  • Jon Fancey showed off some great new batching features which are going to be released for Logic Apps soon.
  • Tord Glad Nordahl tackled the scenario where the gyms sell products like energy bars and protein powders, and need to track sales and stock at all locations to determine when new products must be ordered. BizTalk was the technology behind the scenes, with some Azure Machine Learning thrown in.

Watch out for more integration updates to be announced later in the week.

Innovating BizTalk Server to bring more capabilities to the Enterprise customer - Tord Glad Nordahl

In the second session of the day, Tord walked us through the BizTalk lifecycle and emphasized that the product team is still putting a lot of effort into improving the product and its capabilities. He talked about the recent release of the first Feature Pack for BizTalk Server 2016 and how it tackles some of the pain points gathered from customer feedback. FP1 is just a first step in enriching BizTalk; more functionality will be added and further improved in the time to come.

"BizTalk is NOT dead"

Tord emphasized how important it is to receive feedback from partners and end users. He urged everyone to report all bugs and inconveniences via the UserVoice page, so that together we can all help shape the future of BizTalk Server.
The product team is working hard to release CU packs at a steady cadence, and plans on getting vNext of BizTalk ready before the end of 2018.

No breaking news unfortunately (other than more features coming to the new automated deployment that came in Feature Pack 1), but we're looking forward to Tord's in-depth session about FP1, coming Wednesday. If you can't wait to have a look at what FP1 can do, check out Toon's blog posts!

BTS2016 FP1: Scheduling Capabilities
BTS2016 FP1: Continuous Deployment
BTS2016 FP1: Management & Operational API
BTS2016 FP1: Continuous Deployment Walkthrough

Messaging yesterday, today, and tomorrow - Dan Rosanova

The third speaker of the day was Dan Rosanova, giving us an overview of the evolution of the Messaging landscape and its future.

He started with some staggering numbers: currently, Azure Messaging is processing 23 trillion (23,000,000,000,000) messages per month. That is a giant increase from the 2.75 trillion per month of last year (at Integrate).

In the past, picking a messaging system was comparable to choosing a partner to marry: you pick one you like and you're stuck with the whole package, peculiarities and all. It wasn't easy, and very expensive to change.

Messaging systems are now changing to more modular systems. From the giant pool of (Azure) offerings, you pick the services that best fit your entire solution. A single solution can now include multiple messaging products, depending on your (and their) specific use case.

"Event Hubs is the ideal service for telemetry ingestion from websites, apps and streams of big data."

Where Event Hubs used to be seen as an IoT service, it has now been repositioned as part of the Big Data stack, although it still sits on the edge with IoT.

The Microsoft messaging team has been very busy. Since last year, they have implemented the new Hybrid Connections, new Java and open-source .NET clients, Premium Service Bus went GA in 19 regions, and a new portal was created. They're currently working on more encryption (encryption at rest and Bring Your Own Key) and security: Managed Secure Identity and IP Filtering features, which will be coming soon. So it looks to be a promising year!

Dan introduced Geo-DR, a dual-region active-passive disaster recovery capability coming this summer. The user decides when to trigger this fail-forward disaster recovery. However, this is only meant as a disaster recovery solution and is NOT intended for high-availability or other scenarios.

Finally, Dan added a remark that messaging is under-appreciated and his goal is reaching transparent messaging by making messaging as simple as possible. 

Azure Event Hubs: the world’s most widely used telemetry service - Shubha Vijayasarathy

"The Azure Event Hubs are based on three S's: Simple, stable and Scalable.

Shubha talked about Azure Event Hubs Capture, which replaces the existing Azure Event Hubs Archive service. With Event Hubs Capture there is no overhead in code or configuration. The separate data transfer will reduce the service management hassle, and it's possible to opt in or out at any time. Azure Event Hubs Capture will be GA on June 28th 2017; price changes will go into effect on August 1st 2017.

The next item was Event Hubs Auto-Inflate. With Auto-Inflate it's possible to auto-scale TUs (Throughput Units) to meet your usage needs. It also prevents throttling (which occurs when data ingress and egress rates exceed the preconfigured TUs). This is ideal for handling burst workloads. Its downside is that it only scales up and doesn't scale back down again.
Dedicated Event Hubs are designed for massive-scale usage scenarios. They run on a completely dedicated platform, so there are no noisy neighbours sharing resources on Azure. Dedicated Event Hubs are sold in Capacity Units (CU). Message sizes go up to 1 MB.

Event Hubs Clusters will enable you to create your own clusters in less than 2 hours, with Azure Event Hubs Capture included. Message sizes go up to 1 MB and pricing starts at $5000. The idea is to start small and scale out as you go. Event Hubs Clusters is currently in private preview and will be available as public preview, starting September 2017, in all regions.

Coming soon

- Geo-DR capability
- Encryption at rest
- Metrics in the new portal
- ADLS for public preview
- Dedicated EH clusters for private preview

Azure Logic Apps - build cloud-scale integrations faster - Jeff Hollan / Kevin Lam

Jeff Hollan and Kevin Lam had a really entertaining session which was perfect to avoid an after-lunch-dip! 

Some great new connectors were announced, which will be added in the near future. Among them: Azure Table storage, Oracle EBS, ServiceNow and SOAP. Besides the connectors that Microsoft will make available, the ability to create custom connectors, linked with custom API connections, sounds very promising! It's also great to hear that Logic Apps is now certified for Drummond AS2, ISO 27001, SOC (I, II, III), HIPAA and PCI DSS.

Quite a lot of interesting new features will be released soon:

  • Expression authoring and intellisense will improve the user experience, especially combined with detailed tracing of expression runtime executions.
  • Advanced scheduling capabilities will remove the need to reach out to Azure Scheduler.  
  • The development cycle will be enhanced by executing Logic Apps in draft, which means your Logic Apps can be developed without being activated in production, and promoted afterwards.
  • The announced mock testing features will be a great addition to the framework.
  • Monitoring across Logic Apps through OMS and resubmitting from a failed action, will definitely make our cloud integration a lot easier to manage!
  • And last, but not least: out-of-the-box batching functionality will be released next week!

Azure Functions - Serverless compute in the cloud - Jeff Hollan

Whereas Logic Apps executes workflows based on events, Azure Functions executes code on event triggers. They really complement each other. It's important to understand that both are serverless technologies, which comes with the following advantages: reduced DevOps, more focus on business logic and faster time to market.

The Azure Functions product team has made a lot of investments to improve the developer experience. It is now possible to create Azure Functions locally in Visual Studio 2017, which gives developers the ability to use IntelliSense, test locally and write unit tests.

There's out-of-the-box Application Insights monitoring for Azure Functions. This provides real details on how your Azure Functions are performing, and very powerful insights are available on that data by writing fairly simple queries. Jeff finished his session by emphasizing that Azure Functions can also run on IoT Edge. As data has "gravity", some local processing of data is desired in many scenarios, to reduce network dependencies, cost and bandwidth.
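
For instance, a simple query over the request telemetry (our own sketch, using the Application Insights Analytics query language) already reveals hot and slow functions:

    // Invocation count and average duration per function over the last 24 hours.
    requests
    | where timestamp > ago(24h)
    | summarize invocations = count(), avgDurationMs = avg(duration) by name
    | order by invocations desc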

Integrating the last mile with Microsoft Flow - Derek Li

In the first session after the last break, Derek Li took us for a ride through Microsoft Flow, the solution to the "last mile" of integration challenges. Microsoft Flow helps non-developers work smarter by automating workflows across apps and services to provide value without code.

Derek explained why you should care about Flow, even if you're a developer and already familiar with Logic Apps: 

  • You can advise business users how they can solve some of their problems themselves using Flow, while you concentrate on more complex integrations.
  • You'll have more engaged customers and engaged customers are happy customers.
  • Integrations originally created in Flow can graduate to Logic Apps when they become popular, mission-critical, or need to scale.
  • With the ability to create custom connectors you can connect to your own services.

Some key differences between Flow and Logic Apps:

Flow                                  Logic Apps
----                                  ----------
Citizen-developers                    IT Professionals
Web & mobile interface                Visual Studio or web interface
Access with Microsoft/O365 account    Access with Azure Subscription
Ad-hoc                                Source control
Deep SharePoint integration           -
Approval portal                       -

In short: Use Flow to automate personal tasks and get notifications, use Logic Apps if someone must be woken up in the middle of the night to fix a broken (mission-critical) workflow.

To extend the reach of your custom connectors beyond your own tenant subscription, you can publish your custom connector by performing the following steps:

  1. Develop custom connector within your Flow tenant, using swagger/postman
  2. Test using the custom connector test wizard
  3. Submit your connector to Microsoft for review and certification to provide support for the customer connector
  4. Publish to Flow, Power Apps, and Logic Apps

State of Azure API Management - Vladimir Vinogradsky

This session started with Vladimir pointing out the importance of APIs, as APIs are everywhere: IoT, Machine Learning, Software as a Service, cloud computing, blockchain... The need to tie all of these things together is what makes API Management a critical component in Azure: abstracting complexity and thereby forming a base for digital transformation.

Discover, mediate and publish are the keywords in API Management. For instance: existing backend services can be discovered using the API management development portal.

There is no strict versioning strategy in API Management, as this depends on the specific organization. The reason for this is that there is a lot of discussion on the versioning of APIs, with questions such as:

  • Is versioning a requirement?
  • When is a new version required?
  • What defines a breaking change?
  • Where to place versioning information? And in what format?

Microsoft chose an approach to versioning that is fully featured. It allows the user full control over whether or not to implement it. The approach is based on the following principles:

  • Versioning is opt-in.
  • Choose the API versioning scheme that is appropriate for you.
  • Seamlessly create new API versions without impacting legacy versions.
  • Make developers aware of revisions and versions.

The session concluded with an overview of upcoming features for API Management.

Integrate heritage IBM systems using new cloud and on-premises connectors - Paul Larsen / Steve Melan

The last session of the day was all about integrating heritage IBM systems with Microsoft Azure technologies. It's interesting to know that lots of organizations (small, medium and large) still have some form of IBM system running.

Microsoft developed a brand new Microsoft MQSeries client: extremely lightweight, with no more IBM binaries to be installed and outstanding performance improvements (up to 4 times faster). Thanks to this, the existing integration capabilities with old-school mainframes can now run in the Azure cloud, e.g. as Logic Apps connectors. An impressive demo was shown, showcasing cloud integration with legacy mainframe systems.

The story will become even more compelling with the improvements that are on the way!


Thank you for reading our blog post, feel free to comment or give us feedback in person.

This blogpost was prepared by:

Pieter Vandenheede (BE)
Jonathan Gurevich (NL)
Toon Vanhoutte (BE)
Carlo Garcia-Mier (UK)
Jef Cools (BE)
Tom Burnip (UK)
Michel Pauwels (BE)
Pim Simons (NL)
Iemen Uyttenhove (BE)
Mariëtte Mak (NL)
Jasper Defesche (NL)
Robert Maes (BE)
Vincent Ter Maat (NL)
Henry Houdmont (BE)
René Bik (NL)
Bart Defoort (BE)
Peter Brouwer (NL)
Iain Quick (UK)
Ricardo Marques (PT)
Paulo Mendonça (PT)

Categories: Community

Posted on Friday, June 23, 2017 10:39 AM

Stijn Moreels by Stijn Moreels

As you may know from previous blog posts, I use FAKE as my build script tool to automate compiling, testing, inspection, documentation and many other things. FAKE already has a lot of functionality in place, but it didn't have any support for StyleCop. Until now.

StyleCop and F#

Why StyleCop?

StyleCop is a tool that analyzes your source files against the default Microsoft coding conventions, which describe what code should look like.

It’s not that I agree with every StyleCop rule, but there are more rules that I agree with than not.

FAKE already had support for FxCop, but since ReSharper has IntelliSense support for StyleCop in place, I found it reasonable to make my automated build (local and remote) depend on an analyzing tool that I can use both in my development practice and in my automated build.

StyleCop Command Line

FAKE is written in F#, which means the whole .NET framework is available to us. Besides working with FAKE and some personal fun-projects, I didn’t have much experience with F# so it was a fun challenge.

StyleCop already has some command line tools that you can use to analyze your source files, so in theory my F# StyleCop implementation could just reuse some of those tools and pass the right arguments to them.

F# Implementation

F# is a functional language with imperative features (such as mutable values, foreach…). My goal was to write a purely functional implementation of the command line tools that analyzed the source files.

I’m not going to run through every bit of the code; you can do that yourself via the link at the end of this post.

Before we get started, some practical numbers:

  • I wrote the implementation in 2-3 hours (so it certainly can be improved)
  • The single file contains around 100 lines of code

Just for fun, I checked the length of some C# implementation, and found that all together they had 650 lines of code.

Imperative to Functional

For Each For Each

One of the things I had to implement was the possibility to analyze source, project and solution files. Source files can be analyzed directly; project files must first be decomposed into source files, and solution files must first be decomposed into project files.

When I looked at the C# implementation, you could see that they had implemented a foreach, in a foreach, in a foreach, to get the three different levels of lists.
Each source file must be wrapped in a StyleCop Project object so it can be analyzed, so you must indeed go through every project file and solution file to obtain all those underlying source files.

Functional programming has different approaches. What we want to do is: “create a StyleCop Project for every source file”. That was my initial idea. I don’t want to know where those files came from (project, solution). I came up with this solution:

Every StyleCop Project must have an identifier which must be incremental by each source file it analyzes.
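
A minimal sketch of that core function, as a reconstruction of the idea rather than the exact code (StyleCop's CodeProject takes an id, a location and a Configuration; the real implementation also registers each file with the StyleCop environment):

    open System.IO
    open StyleCop

    // Tail-recursively wrap every source file in its own StyleCop CodeProject,
    // incrementing the id for each file.
    let rec createProjects id acc files =
        match files with
        | [] -> List.rev acc
        | (file : string) :: rest ->
            let project = CodeProject(id, Path.GetDirectoryName file, Configuration(null))
            createProjects (id + 1) (project :: acc) rest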

In my whole file there are no foreach loops, but this function is called three times directly and once indirectly. So, you could say it's the core of the file.

The function takes a start ID, together with the files to run through, and creates StyleCop Project instances. It's a nice example of Tail Recursion in Functional Programming, where you let a function run through every element in a list (via the tail).
At the end, we combine all the projects into StyleCop Project instances and return that list.

Shorthanded Increment

In the C# implementation, they used the shorthand increment (++) to assign the next integer as the Project ID. In the previous snippet, you see that the ID is also passed to the function and increased internally. This way we can reuse the function, because we can start not just from zero, but from any valid integer. And that's what I've done.

The source files can call this function directly; project files go through the source files and solution files go through the project files, but they all use this function, the Tail Recursion. At the end, we combine all the StyleCop Projects created from the source files, project files and solution files.

I could have created a counter function that has a Closure inside to count the ID’s though:
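
Such a counter could look like this hypothetical sketch (note that F# closures cannot capture a let mutable, so a ref cell is used instead):

    // A counter function closing over a ref cell that holds the next id.
    let makeCounter start =
        let current = ref start
        fun () ->
            let id = current.Value
            current.Value <- current.Value + 1
            id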

This would have reduced the arguments we must send with the function, and would remove the implicit reference to the project IDs and the project length.

Feel free to contribute!

C# Assemblies

The assemblies used in this file, are also written in C#. So, this is an example of how C# assemblies can be used in F# files without much effort. The bad side is that a “list” in C# isn’t the same as a “list” in F#, so some conversions are needed.
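
For example, a C# List&lt;T&gt; surfaces in F# as ResizeArray&lt;T&gt;, so you end up converting back and forth:

    // Converting between the C# and F# list representations.
    let toFSharpList (xs : ResizeArray<string>) : string list = List.ofSeq xs
    let toCSharpList (xs : string list) : ResizeArray<string> = ResizeArray xs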

Also, Currying or Partial Application in F# isn't possible with C# objects. If it were possible, I think my implementation would look just that bit more functional than it does now.


Personally, I was surprised that I could write a full implementation in just 2 - 3 hours. F# is a fun language to play with, but also to write solid declarative implementations quickly. I hope that I can use my functional skills more in actual production projects.

My interest in functional programming has increased by writing this implementation for StyleCop, and I expect that not only my functional programming skills will become better in the future, but also my object-oriented programming skills.

Thanks for reading and check the implementation on my GitHub.

Remember that I only worked for 2 - 3 hours; so, contribute to the FAKE library if you have any suggestions because EVERYTHING can be written better.

Categories: Technology
Tags: Code Quality, F#
written by: Stijn Moreels

Posted on Thursday, June 22, 2017 2:41 PM

Toon Vanhoutte by Toon Vanhoutte

In this blog post, I'll explain in depth the routing slip pattern and how you can leverage it within enterprise integration scenarios. As always I'll have a look at the benefits, but also the pitfalls will get some well-deserved attention.

The Pattern


A routing slip is a configuration that specifies a sequence of processing steps (services). This routing slip must be attached to the message to be processed. Each service (processing step) is designed to receive the message, perform its functionality (based on the configuration) and invoke the next service. In that way, a message gets processed sequentially by multiple services, without the need of a coordinating component. The schema below is taken from Enterprise Integration Patterns.

Some examples of this pattern are:

Routing Slip

Routing slips can be configured in any language; JSON and XML are quite popular. An example of a simple routing slip can be found below. The header contains the name of the routing slip and a counter that carries the current step number. Each service is represented by a routing step. A step has its own name, to identify the service to be invoked, and specific key-value configuration pairs.
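
A hypothetical example in JSON, with field names entirely of our own choosing:

    {
      "routingSlip": {
        "name": "Order_To_SAP",
        "currentStep": 0,
        "steps": [
          { "name": "ValidateXml", "version": "1.2", "config": { "schema": "Order_v1" } },
          { "name": "Transform", "version": "2.0", "config": { "map": "Order_To_SapOrder" } },
          { "name": "SendToSap", "version": "1.0", "config": { "endpoint": "sap-orders-queue" } }
        ]
      }
    }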

Remark that this is just one way to represent a routing slip. Feel free to add your personal flavor…

Assign Routing Slip

There are multiple ways to assign a routing slip to a message. Let's have a look:

  • External: the source system already attaches the routing slip to the message
  • Static: when a message is received, a fixed routing slip is attached to it
  • Dynamic: when a message is received, a routing slip is attached, based on some business logic
  • Scheduled: the integration layer has routing slips scheduled that also contain a command to retrieve a message


The Service

A service is considered a "step" within your routing slip. When defining a service, you need to design it to be generic. The logic executed within the service must be based on the configuration, if any is required. Ensure your service has a single responsibility and a clear boundary of its scope.

A service must consist of three steps:

  • Receive the message
  • Process the message, based on the routing slip configuration
  • Invoke the next service, based on the routing slip configuration

There are multiple ways to invoke services:

  • Synchronous: the next service is invoked without any persistence in between (e.g. in memory). This has the advantage that it will perform faster.
  • Asynchronous: the next service is invoked with persistence in between (e.g. a queue). This has the advantage that reliability increases, but performance degrades.

Think about the desired way to invoke services. If required, a combination of sync and async can be supported.


Benefits

Encourages reuse

Integrations are composed of reusable and configurable building blocks. The routing slip pattern forces you to analyze, develop and operate in a streamlined manner. Reuse is heavily encouraged on different levels: the way analysis is performed, how patterns are implemented, the way releases are rolled out and how operational tasks are performed. One unified way of working, built on reusability.

Configuration based

Your integration is completely driven by the assigned routing slip. There are no hard-coded links between components. This allows you to change its behavior without the need of a re-deployment. This configuration also serves as a great source of documentation, as it explains exactly what message exchanges are running on your middleware and what they exactly do.

Faster release cycles

Once you have set up a solid routing slip framework, you can increase your release cadence. By leveraging your catalogue of reusable services, you heavily benefit from previous development efforts. The focus is only on the specifics of a new message exchange, which are mostly data bound (e.g. mapping). There's also a tremendous increase of agility, when it comes to small changes. Just update the routing slip configuration and it has an immediate effect on your production workload.

Technology independent

A routing slip is agnostic to the underlying technology stack. The way the routing slip is interpreted is, of course, specific to the technology used. This introduces ways to have a unified integration solution, even if it is composed of several different technologies. It also enables cross-technology message exchanges. As an example, you can have an order that is received via an AS2 Logic App, transformed, and sent to an on-premises BizTalk Server that inserts it into the mainframe, all governed by a single routing slip configuration.

Provides visibility

A routing slip can introduce more visibility into the message exchanges, certainly from an operational perspective. If a message encounters an issue, operations personnel can immediately consult the routing slip to see where the message comes from, what steps were already executed and where it is heading. This visibility can be improved by updating the routing slip with some extra historical information, such as the service start and end times. Why not even include a URL in the routing slip that points to a wiki page or knowledge base about that interface type?


Pitfalls

Not enough reusability

Not every integration project is well-suited for the routing slip pattern. During the analysis phase, it's important to identify the integration needs and to see if there are a lot of similarities between all the message exchanges. When a high level of reusability is detected, the routing slip pattern might be a good fit. If all integrations are too heterogeneous, you'll introduce more overhead than benefits.

Too complex logic

A common pitfall is adding too much complexity to the routing slip. Try to stick as much as possible to a sequential series of steps (services) that are executed. Some conditional decision logic inside a routing slip might be acceptable, but define clear boundaries for such logic. Do not start writing your own workflow engine, with its own workflow language. Keep the routing slip logic clean and simple, and stick to the purpose of a routing slip.

Limited control

In case of maintenance of the surrounding systems, you often need to stop a message flow. Let's take the scenario where you face the following requirement: "Do not send orders to SAP for the coming 2 hours". One option is to stop a message exchange at its source, e.g. stop receiving messages from an SFTP server. In case this is not accepted, as these orders are also sent to other systems that should not be impacted, things get more complicated. You can stop the generic service that sends a message to SAP, but then you also stop sending other message types… Think about this upfront!

Hard deployments

A very common pain-point of a high level of reuse, is the impact of upgrading a generic service that is used all over the place. There are different ways to reduce the risks of such upgrades, of which automated system testing is an important one. Within the routing slip, you can specify explicitly the version of a service you want to invoke. In that way, you can upgrade services gradually to the latest version, without the risk of a big bang deploy. Define a clear upgrade policy, to avoid that too many different versions of a service are running side-by-side.


Monitoring

A message exchange is spread across multiple loosely coupled service instances, which could impose a monitoring challenge. Many technologies offer great monitoring insights for a single service instance, but lack an overall view across multiple service instances. Introducing a correlation ID into your routing slip can highly improve the monitoring experience. This ID can be generated the moment you initialize a routing slip.


Conclusion

Routing slips are a very powerful mechanism to deliver unified and robust integrations in a fast way. The main takeaways of this blog are:

  • Analyze in depth if you can benefit from the routing slip pattern
  • Limit the complexity that the routing slip resolves
  • Have explicit versioning of services inside the routing slip
  • Include a unique correlation ID into the routing slip
  • Add historical data to the routing slip

Hope this was a useful read!


Categories: Architecture
Tags: Design
written by: Toon Vanhoutte