
Codit Blog

Posted on Friday, October 24, 2014 6:00 PM

by Sam Vanhoutte

On October 21, I presented on Azure Hybrid Connections in the Channel 9 studios in Redmond for the AzureConf conference, the biggest online Azure conference in the world. This blog post is the story of my day.

Tuesday was a great day for me.  I had the chance to talk at AzureConf for the first time and it was really great. AzureConf is the biggest Microsoft Azure online conference, and this was its third edition.  I'm really honored to be among this great line-up of speakers.  The conference was streamed from the Channel 9 studios on the Microsoft campus in Redmond and had several thousand viewers (stats might be shared later on).  Scott Klein organized the conference this year and I really want to thank him and his team for the good organization and the chance we all got.

The preparation

Since I knew I had to present on Hybrid Connections, I immediately started planning for this talk.  I had never given this talk as such before (I had only presented Hybrid Connections as part of a broader session), so there was a lot of preparation needed.   I used the existing TechEd presentation as input and guideline, but added more specific content and details in order to position Hybrid Connections and compare it with Service Bus Relay and Virtual Networking.

I also had some specific questions and things I wanted to get clarified, and for that I could count on the help and guidance of Santosh Chandwani (PM on the Hybrid Connections team).  As always, I spent most of the time on my demo, for which I used our Codit Integration Dashboard and moved it to the cloud, while the data and back-end services remained on premises.  I also built a new mobile service and a universal app, both for the first time.  And to top it off, I exposed a managed API through Azure API Management.


The day before the conference, all speakers were invited for technical pre-checks in the Channel 9 studios.  It was great to see the famous studio and all the equipment used there.  You immediately felt that the atmosphere was really nice.

We got to know the nice crew of people there and had to test our laptops for screen projection, network connectivity and sound.  That turned out to be very important, as both Mike Martin and I had some screen resolution issues.  Scott also handed out our shirts and we all went our own way to prepare for our talks the next day.

AzureConf day

October 21 started.  After a final dry run of the demo, we drove to the studios at 6:45 AM.  Tension was building, as we saw the twittersphere getting more active about the event.  People from all over the world were tuning in for the keynote of Scott Guthrie.

We settled ourselves in the speaker room and all watched Scott Guthrie detailing a lot of nice announcements, which can be found on the Azure blog.

The live sessions

We watched the sessions from the other speakers, either from the speaker room or from the 'Channel 9 war room'.  I believe the content of the sessions was very good and showed a good variety of the services that are available on the Microsoft Azure platform.  The live sessions are available on Channel 9 as well, so if you have missed a session, go ahead and watch it online.

  • Michael Collier: Michael talked about the resource manager in Azure.  Very interesting capabilities of a service that will definitely evolve over time.
  • Mike Martin: Mike had a nice session on one of the topics that is crucial for every company: backups.  He showed how the Azure platform offers features & services for this.
  • Sam Vanhoutte: I guess that's me.  Hybrid connections, web sites, mobile services, service bus relay & API management.  All in one session.
  • Seth Juarez: This was a great session on one of the newest services in Azure: Machine Learning.  By combining humor and complex maths, he made the complex subject of Machine Learning much more 'digestible'.
  • Rick G Garibay: Rick gave a session that was very similar to the sessions I gave on IoT at the UKCSUG, Cloudburst and WAZUG.  Positioning the investments of Microsoft around IoT and discussing Reykjavik and the concepts of cloud-assisted communications.  Great to see that the complex demo worked.  I can guarantee it's not easy.
  • Vishwas Lele: Vishwas showed tons of tools and concepts (of which I believe Docker and the Traffic Manager for SharePoint were really nice).
  • Chris Auld: Chris talked about DocumentDB, the new document database in Azure.  A really good explanation and demonstration of the backend for MSN and OneNote.

Everything was recorded in the Channel 9 studios and here's a nice group picture of all live speakers with the Channel 9 crew.

The recorded sessions

And to add to the great live content, there are also a lot of recorded sessions available on Channel 9.  I would encourage you all to have a look and download those sessions to watch whenever you have the time, as there's really great content out there.

It was a real honour and pleasure to be part of this group of great speakers.  And with this, I would like to thank Scott Klein for having me over, the great crew of Channel 9 and all speakers for the great time.



written by: Sam Vanhoutte

Posted on Wednesday, October 22, 2014 3:35 PM

by Maxim Braekman

Ever needed the actions performed by a pipeline component to differ depending on whether the main or backup transport is being used? This post explains how you can implement a simple verification before actually performing any of the required changes.

On some occasions an interface requires the send ports to have a configured backup transport location, just in case the primary location is unreachable. These send ports could also be using a pipeline component which has to perform a couple of actions before allowing the file to be transmitted to its destination. Of course, the configuration of these components can be modified per send port, but what if these actions also need to differ when the main location is unreachable and the backup transport is being used?

For the purpose of this post let’s say we will be moving a file from one location to another and we need to configure a backup location, just in case. An additional requirement is that the filename needs to contain some values present in the message body, but needs to differ depending on the main/backup location.
Before we start developing the custom pipeline component - needed because of the filename requirement - we need to find out how we can implement a check on what location is being used.

This can be done by setting up a receive and send port, of course with a configured backup transport location on the latter, and making sure the primary send location is unreachable so the backup transport kicks in. Looking at the tracking, this would give us something as shown below.

As you can see, the current configuration allows the send port to retry 3 times before the backup transport is used. Once an attempt is made to send to the backup location, the message is processed successfully.
Now let's have a closer look at how we can identify what type of transport location is being used. Opening up the context properties of the first 'transmission failure' record shows us the existence of a context property named 'BackupEndpointInfo'.

As you can see, this property contains a GUID referencing the backup transport, which will be used if the main location is unreachable. Now, what if we have a look at the context properties of the message when it is actually being sent over the backup transport?

The ‘BackupEndpointInfo’-property is still present, although it no longer contains any value, since a backup location cannot have another backup.
In order to have a complete overview of the existence/usage of this property, let’s create a send port which does not have a backup transport location configured, and is referring to a correct, reachable location. Send another message through this flow and look at the context properties of the message being processed by this new send port.

Note that these properties are sorted alphabetically and no ‘BackupEndpointInfo’-property is present.
So, this means the property is only available when a backup transport is configured, and it only contains a value while the send port is sending to the main transport location.

Implementation within custom pipeline component

Now that we have figured out which property can be used to check the transport location, we can use this info to implement our custom pipeline component.
The code can be found below, but to summarize: you have to attempt to retrieve the BackupEndpointInfo property from the context. If the returned object is NULL, no backup transport has been configured. If the property is present, you can determine from its value whether the backup transport is actually being used.
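A minimal sketch of what the relevant part of the component's Execute method could look like is shown below. The property namespace used here is the standard BizTalk system-properties namespace; verify it against your BizTalk version before relying on it.

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    const string propName = "BackupEndpointInfo";

    // Assumption: BackupEndpointInfo lives in the BTS system-properties
    // namespace; double-check this in your environment.
    const string propNamespace =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    object backupEndpointInfo = message.Context.Read(propName, propNamespace);

    if (backupEndpointInfo == null)
    {
        // Property absent: no backup transport is configured on this send port.
    }
    else if (string.IsNullOrEmpty(backupEndpointInfo.ToString()))
    {
        // Property present but empty: the message is being sent
        // over the backup transport.
    }
    else
    {
        // Property contains a GUID: a backup transport is configured,
        // but the main transport is currently being used.
    }

    return message;
}
```

Each branch is where you would plug in the transport-specific actions, such as composing a different filename.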



Whenever you need to find out whether a backup transport is configured and is being used, check the message context for the existence of the 'BackupEndpointInfo' context property.


Categories: BizTalk Pipelines
written by: Maxim Braekman

Posted on Monday, October 13, 2014 10:15 AM

by Massimo Crippa

In the extensibility series we explored how to extend Sentinet to add features to the product. In this post we will take a look at Sentinet's management API, a set of functions to interface with the Sentinet Repository.

The Management API

The Sentinet management API is available as a REST or SOAP service. The main concept here is that every operation you can perform through the Sentinet UI is also available as an API.
You can then leverage those interfaces to write custom tools that best fit your needs. A common example is a script to automate your versioning strategy or to change authorizations.


With the "mobile first, cloud first" mantra ticking in our minds, we want to make the hosted services' usage information and statistics available to customers outside the enterprise boundaries.
To achieve that, we're going to virtualize a few management API operations (e.g. usage report, node status) and present them on an Azure website using Azure Hybrid Connections.

Sentinet is natively integrated with Microsoft Azure. Other possible ways to bridge from a cloud-based application to your on-premises resources are to use the Azure Service Bus Relay or, for example, to configure the Sentinet Nodes in a hybrid deployment scenario where some Nodes are deployed on-premises while others are in the cloud.

Virtualize the management API

The native management API - like any other API - can be virtualized to extend or modify its own operations/behavior. Let’s pick the GetTopServiceVersionsUsageReport operation as an example and virtualize it.

The GetTopServiceVersionsUsageReport service returns different statistical data based on the {REPORTTYPE} parameter. I want to simplify this operation to get rid of all the parameters except the number-of-services parameter {TOPCOUNT}, for which the statistics need to be fetched.
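Calling the simplified, virtualized operation could then look like the sketch below. Only the operation name and the {TOPCOUNT} parameter come from the scenario above; the base address, URL shape and API-key header name are assumptions for this illustration.

```csharp
using System;
using System.Net.Http;

class ReportClient
{
    static void Main()
    {
        // Hypothetical base address of the virtualized API on the Sentinet Node.
        var baseAddress = "http://sentinet.example.com/vs/reports/";

        using (var client = new HttpClient())
        {
            // The virtual service validates an API key passed as an HTTP
            // header (the header name here is an assumption).
            client.DefaultRequestHeaders.Add("X-Api-Key", "<your-api-key>");

            // Only the {TOPCOUNT} parameter survived the virtualization:
            // request usage statistics for the top 5 service versions.
            var response = client
                .GetAsync(baseAddress + "GetTopServiceVersionsUsageReport/5")
                .Result;

            response.EnsureSuccessStatusCode();
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}
```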

Let's see which Sentinet features I leveraged to compose the new virtualized service to fit our scenario:
- Virtualization (to expose only the operations we want and reduce the interface complexity).

- Protocol mediation (from HTTPS to HTTP). Though HTTPS should be used everywhere for APIs, we will use HTTP to simplify the scenario configuration.

- Authorization (from username/password to API keys). This is a two-step process, where the first step is to set up the access control with an HTTP header validation.

Any management API call has to be authenticated with a token that is obtained by calling the login method with a username and password. The second step is to plug in the SignInManager behavior that maps the API key to a valid authorization token.

Mobile first, cloud first

In order to target a variety of devices I prepared a simple ASP.NET MVC 5 web application. It uses Bootstrap 3 as its default CSS framework and it's a matter of a few clicks to publish it to Azure Websites.

You can then set up the Azure website to use Hybrid Connections, which enable you to build hybrid applications using the same connection string/URI that you would normally use if these services were hosted locally on your private network.
Hybrid Connections will be activated only on the node machine where I hosted the virtualized API, so only a specific subset of services will cross the enterprise's boundaries.
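In practice this means the cloud-hosted site can keep an on-premises-style connection string. A minimal sketch (the SQL Server host, database and credentials are placeholders):

```csharp
using System.Data.SqlClient;

// With a Hybrid Connection in place, an Azure Website can use the same
// connection string it would use on the private network; the endpoint
// (host:port) just has to match the Hybrid Connection definition.
var connectionString =
    "Data Source=ONPREMSQL01,1433;Initial Catalog=Sentinet;" +
    "User ID=appUser;Password=<secret>;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... query on-premises data as if running locally ...
}
```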

The procedure to set up the Azure website and the Hybrid Connection (HC) is as follows:
1. Create an Azure Website
2. Create a Hybrid Connection and a BizTalk Service
3. Install the on-premises Hybrid Connection Manager to complete the connection

Once completed, the Hybrid Connection is displayed in the website as connected.


To test our scenario, access the Azure website. If the virtual machines I used for this demo are up and running, you will see the number of transactions increase every time you load the main page; otherwise mocked data will be displayed.

To test the responsive design of the application and the compatibility with different browsers/devices, a service like BrowserStack can be used.


In this post we saw how Sentinet's management API, like any other API, can be virtualized to change its accessibility and security requirements. In the next post I will continue the extensibility series and discuss the WIF extensibility point.



written by: Massimo Crippa

Posted on Friday, October 10, 2014 4:25 PM

by Glenn Colpaert

Last Wednesday Glenn Colpaert was awarded ‘Integration MVP’ for the first time.
This blog post is all about saying thank you and giving an overview of where it all started for Glenn.

Awarded first time integration MVP

Let's start this blog post with someone else telling my story of October 1st & 2nd; I'm happy to quote Sam Vanhoutte directly from our internal SharePoint:
“Imagine this. You get nominated for MVP, you have filled out all the community things you have done over the past months (blog posts, organizing GWAB, speaking at events, helping out on forums and Twitter...) and then you know the day has come when all new MVPs get an e-mail in their mailbox.
And then you see other MVPs (old and new) tweeting how happy and honored they are with their (re)new(ed) title. And you congratulate them and F5 your own mailbox. And nothing happens, no new e-mails. You feel disappointed, but you know that you just have to continue and hope for the new quarter in a few months.
You settle with it and then you have to do some banking business. You expect an e-mail from that online banking system and you don't get it. You check the spam folder, but you don't find the banking e-mail there. No, you find the following e-mail from the MVP program, letting you know that you have been awarded the MVP title for the Integration competence. Oh, the suspense!”

The day after

This award really came as a surprise to me, not least because I discovered it way too late.
I believe this award is more than I currently deserve. But nevertheless, it gives me that extra motivation to work even harder to make an impact on the current Integration community.

So, where did it all start?

I guess it all started about 5 years ago when I joined this amazing company. They gave me all the necessary opportunities to develop myself on a professional and personal level.
So actually a big thank you goes out to Codit for giving me all the support and possibilities that were necessary to achieve this award.
When it comes to community activities, I started being active in the community about 2 years ago, blogging on a regular basis and joining local User Groups and MeetUps.
This year I had the opportunity to help organize a GWAB location, and I also got the chance to do my first community sessions there. A couple of months later I did my first session for the local BizTalk User Group (BTUG).
I also continued blogging on a regular basis and answering questions on the BizTalk MSDN forum.

What now?

Like I said in the start of this post, this award gives me that extra motivation to continue what I’m doing and actually kick it up a notch.
To end this post I would like to thank some people for giving me the insight and opportunities in this amazing community! (In no particular order): Sam Vanhoutte, Tom Kerkhove, Pieter Vandenheede, Mike Martin, Maarten Balliauw, Yves Goeleven, Kristof Rennen, and AZUG.

Glenn Colpaert


Categories: Community
written by: Glenn Colpaert

Posted on Wednesday, October 8, 2014 4:52 PM

by Tom Kerkhove

and Sam Vanhoutte

Two weeks ago, a lot of great speakers talked about the future of integration, SaaS, mobility, and more during the Codit Integration Summit. Near the end of that day, Sam Vanhoutte & I were on stage to show our demo.

In this blog post I will briefly walk you through our concept to illustrate what we've built and how we've used Codit Integration Cloud as an integration hub.

Data farming with Kinect for Windows & Twilio

During the coffee breaks, attendees were able to complete a small survey about their vision on the Microsoft integration roadmap, mobility, SaaS and the question whether they think that Codit has a special “NomNomNom” calendar (yes, we really do have that :)). The fun part was that they had to use their body to answer the questions, using a Kinect for Windows v2 sensor to push buttons and move sliders!

For those who were unable to attend but still wanted to complete the survey, Sam developed a Twilio application that allowed them to call a certain Twilio number. This number would ask the same questions as the Kinect application, and the callers were able to answer by using the keypad. Each time a question was answered, Twilio called a custom API that was exposed through Azure API Management and passed us the given answer. Sam illustrated this on stage by doing a Skype call to show how it worked.
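A keypad-driven question like this is typically expressed in Twilio as TwiML, using the Gather verb. A minimal sketch of what one survey question could look like; the action URL is a placeholder for the API Management endpoint, and the question wording is illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <!-- Collect one keypad digit and POST it to our (placeholder) survey API -->
  <Gather numDigits="1" method="POST"
          action="https://example.azure-api.net/survey/answer?question=1">
    <Say>Question one. Press 1 for yes, press 2 for no.</Say>
  </Gather>
</Response>
```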

With both the Kinect & Twilio application receiving answers we decided to send everything to Codit Integration Cloud so the results could be processed.

Twilio Flow

Figure I – Survey flow of a Twilio call

What the attendees didn’t know was that the Kinect application was more than a simple survey application: thanks to Kinect for Windows, we were able to generate some metadata about the attendee taking the survey. For starters, we took 10 pictures that were stored in Azure Blob Storage & locally on the device, so we knew who he/she was. Next to that, we were able to track the user's body, which allowed us to assign a “unique” ID to the survey session. Last but not least, while the attendee was answering questions, their facial expressions & behaviour were also being tracked. This allowed us to do some face analytics, indicating whether the person was not really interested, whether he/she was smiling, how likely it was that they were wearing glasses, etc.
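Storing a captured picture in Azure Blob Storage could look like the sketch below, using the Azure Storage client library of that era. The connection string, container name, file path and the bodyId variable are placeholders for this illustration.

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Connect to the storage account (connection string is a placeholder).
var account = CloudStorageAccount.Parse("<storage-connection-string>");
var blobClient = account.CreateCloudBlobClient();

// One container for all survey pictures (name is an assumption).
var container = blobClient.GetContainerReference("survey-pictures");
container.CreateIfNotExists();

// One blob per capture, keyed on the "unique" body ID of the session.
string bodyId = "<tracked-body-id>";
var blob = container.GetBlockBlobReference(bodyId + "/capture-01.jpg");

using (var stream = File.OpenRead(@"C:\Captures\capture-01.jpg"))
{
    blob.UploadFromStream(stream);
}
```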

The application also used the Kinect to count the number of people in the scene, which was sent to Codit Integration Cloud along with the other tracked data.

Note – the “unique” body ID is not 100% unique: if the user left the scene and came back in, he/she would be tracked with a different ID.

Kinect flow

Figure II – Overview of the Kinect application

Codit Integration Cloud

Codit Integration Cloud is a flexible cloud platform from Codit where you can configure one or more receivers of a certain type, e.g. Azure Blob Storage, as input endpoints. Next to receivers, you can also configure one or more senders that filter on the inbound data and send it out; these also ship in several types. Everything that happens in Integration Cloud is automatically tracked, so you can monitor what is happening.

The platform allows us to easily expand our flow as the process changes – we used Power Pivot to monitor the data, but we could also send push notifications to our phones by adding a Notification Hub sender. Codit Integration Cloud is mainly based on configuration instead of development, which makes it easy to use, but it is still extensible. You can create workflows and upload them, so you can execute these before a message is sent or when a new one comes in.

In our demo we used Azure Blob Storage as the receiver type, polling on our blob container, and we used a workflow to promote the application-specific properties. We could also have used Service Bus queues and added the properties to the message properties, but we wanted to show the flexibility that you have.

With that said, let's have a look at the data the Twilio & Kinect applications are sending to Integration Cloud!

As you can see there are three different types of actions –

  • People-Detection indicates that the amount of tracked people in the scene has changed
  • Question-Answered resembles somebody answering a question where we clearly see the question & given answer
  • Expression-Analytics contains the analytics of the tracked person regarding their face expressions

Note that each incoming message contains the Person ID - when available - which allows us to link it to the data.

Codit Integration Cloud - Tracking

Figure III – Codit Integration Cloud Tracking

In our demo we only used two clients – Twilio & Kinect – but imagine that your corporation wants to use these applications at all of its events. That would mean there could be hundreds of these applications running, farming and sending all their data at the same time. This is where you also benefit from Codit Integration Cloud, because it aggregates all the data from the clients at the different events & booths. This allows us to use Codit Integration Cloud as an integration hub, where we can make routing decisions based on the Actions, Events, Booths, etc. from the context of the messages.

Your corporation is able to spin up as many clients as it wants, while our hub handles the routing of the data to the desired destination.

Data is knowledge

In our demo Sam used Power Pivot to show how easily we could analyse the survey results based on the tracked information in Codit Integration Cloud and show the results to the attendees. This allowed us to conclude, for example, that in 2015 80% of the attendees will have more than 75% SaaS in their corporation.

Power Pivot

Figure IV – Power Pivot survey results

But we could do more than survey analytics; there are many scenarios where your corporation could process the data. Let's look at some other scenarios –

  • Improved Customer Support – While we currently only use the Kinect “Body ID” as a reference to the user, we could expand the Kinect scenario with RFID tracking. This would allow us to map the Kinect ID to the name of the attendee and link the answers to that specific person. As an example, your corporation would know that person X is looking into mobility but needs a partner.
  • Internet of Things – Thanks to the ‘People-Detection’-action we know how many people there are on the scene so we could send a command to a device near the door of the booth.
    This would allow us to display a busy red LED when someone is taking the survey or change it back to green when someone may come in.
  • Mobile Notifications/SMS - Imagine that attendees could raise their hand when they have a question; we could then send a message to Codit Integration Cloud with a new action ‘Requesting-Assistance’. Based on the Event, Booth & Action, we could then send a push notification to a client application, or SMS one of your employees on-site who can help the attendee with their question.

But there are other scenarios as well where you can send all your data to Codit Integration Cloud and process it later on – for example Machine Learning, where you store your data in Azure Blob Storage and perform machine learning on it to get the best out of your data and make predictions.

Data processing

Figure V – Data processing examples

We ended our demo by raffling two bottles of champagne among the attendees of the survey; to do so, we were able to use the pictures that were taken during their sessions.

Regarding any privacy concerns: rest assured, the data captured during the surveys was handled very strictly and has since been purged.

Thanks for reading,


Categories: .NET Azure IoT