
Codit Blog

Posted on Tuesday, November 18, 2014 4:04 PM

Glenn Colpaert by Glenn Colpaert

This blog post describes some weird behavior we recently encountered while trying to promote an object to the context. One of the requirements in our project was to promote some values to the context, and during development of the pipeline component we discovered the behavior described below, both when receiving and when sending a message.

The Setup

To promote properties to the context you of course need a property schema. We created the following property schema with one property called Customer.

Next to that, we have a basic ‘Customer’ class defined that will contain our Customer object.

Last but not least we have a pipeline component that creates a Customer object and writes that object to the context.

Please note that we’ve intentionally passed the entire customer object as a parameter to the Write method, just like IntelliSense shows us.
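To illustrate the setup, here is a minimal sketch of the offending Execute method. The Customer class members, the property name and the target namespace (https://MyProject.PropertySchema) are hypothetical stand-ins for our actual schema; only the Write call with the object parameter mirrors what we did.

```csharp
// Minimal sketch of the faulty pipeline component (hypothetical names).
// IBaseMessageContext.Write accepts System.Object, so passing the whole
// Customer instance compiles and executes without complaint.
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    var customer = new Customer { Id = 123, Name = "John Doe" };

    // BUG: writing a non-serializable object to the context. The pipeline
    // returns the message successfully, but publication silently fails.
    pInMsg.Context.Write("Customer", "https://MyProject.PropertySchema", customer);

    return pInMsg;
}
```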

Next to that, we have the following Receive Location and Send Port created in our BizTalk environment.

Receive Pipeline

When we drop a file in our receive location with the above setup, with our CustomerPromotor pipeline component added to the receive pipeline, we noticed the following behavior: our pipeline component executes and returns the message successfully.

But the message does not enter the BizTalk MessageBox and the file remains in the drop folder as shown below.

While this message remains in the inbound folder, the receive location will keep picking it up and will keep trying to process it, unsuccessfully. It is important to stress that this behavior does not trigger any warning or error in the event log, so we were left in the dark in this scenario.

Send Pipeline

So let’s see what kind of behavior is triggered when we use our pipeline component in a send pipeline scenario.

Again our pipeline component executes and returns the message successfully.

But this time we do get a notification that something is wrong and our message gets suspended with the following error:

In the event log we see the following error appearing.
A message sent to adapter "FILE" on send port "spCustomerRequestResponse" with URI
"C:\Users\gcolpaert\Desktop\Drop\Out\%MessageID%.xml" is suspended.
Error details: 80004005
MessageId: {6E32D903-E862-4037-9737-FF96D25CDB77}
InstanceID: {EFC3BFEA-8C44-47C5-BE24-4707A76EA1E6}


We discovered this behavior by accident while trying to promote an XML-serialized object to the context in an outbound scenario, where we had forgotten to implement the XML serialization.

The error we received from BizTalk was not clear and we had to figure out the cause by debugging our pipeline component. We then took this scenario to our receive location and discovered the problem with the pick-up of the file.

We solved this problem by serializing our customer object to an XML string and adding that string to the context property, like we planned in the first place. We know not many people will encounter this problem, as we mostly promote strings or other base types to the context. The main goal of this blog post was to intentionally trigger this behavior and get the information out there, because we could not find any information on this behavior and error.
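As a sketch of the fix (using the same hypothetical property name and namespace as in the setup above), we serialize the object first and write the resulting string to the context:

```csharp
// Serialize the Customer object to an XML string before writing it to
// the context; strings promote without any of the issues described above.
var serializer = new XmlSerializer(typeof(Customer));
string customerXml;
using (var stringWriter = new StringWriter())
{
    serializer.Serialize(stringWriter, customer);
    customerXml = stringWriter.ToString();
}

pInMsg.Context.Write("Customer", "https://MyProject.PropertySchema", customerXml);
```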

Hope you enjoyed it!


Glenn Colpaert

Posted on Monday, November 3, 2014 3:31 PM

Massimo Crippa by Massimo Crippa

Sentinet by Nevatech is a lightweight and scalable middleware infrastructure that manages heterogeneous SOA and API services and applications deployed on-premises, in the cloud, or in hybrid environments.

This week I'm in Barcelona attending Microsoft TechEd, so I stopped by the Nevatech booth (63) and had a chat with Andrew Slivker, the CTO of Nevatech.
I want to share some interesting questions that popped up during our conversation about Nevatech’s Sentinet software product.

Q : What makes Sentinet stand out from other SOA Management offerings?
A: There are actually quite a few differentiators. First of all, Sentinet is the only product in the SOA Governance and API Management space that is built on the full Microsoft stack. It is most beneficial to organizations that either standardize entirely on the Microsoft stack when it comes to building and operating their services and APIs, or use a mix of Microsoft and non-Microsoft service applications and want to integrate them to the fullest extent possible.

Sentinet is also a unified on-premises, cloud and hybrid solution. It is the same product that customers can operate on-premises within their private data centers, in the cloud, or distribute parts of the product between cloud and on-premises in a hybrid scenario. At the same time, Sentinet can manage customer services and APIs that themselves can be on-premises, in the cloud or in hybrid environments.

Another differentiator is that Sentinet is highly extensible through standard and simple Microsoft .NET interfaces. Customers can easily add their own specific service behaviors with minimum development effort and integrate these behaviors in the product run-time and User Interface.

Sentinet’s own User Interface is another differentiator where it provides easy to use, highly interactive and yet functionally powerful means to configure and operate complex SOA services and REST API solutions.


Q : In the virtualization context, could you shed some light on the Sentinet tracking mechanism and how big is its impact on the virtual service turnaround time?
A: Sentinet is designed to be lightweight and reliable. It processes messages with a minimum average overhead of 5 to 10 ms per message, and all the monitoring and business message tracking information is delivered to the Sentinet Repository asynchronously, which means that business transactions are not slowed down by the Sentinet infrastructure.


Q: How does API lifecycle management work, and how is a modification applied at runtime?
A: Services transition through multiple stages and modifications during their life-cycle. Sentinet users apply these changes using the Sentinet Administrative Console, a rich browser-based application. All changes are tracked for auditing purposes. Changes that affect the actual runtime environments are delivered remotely to the Sentinet runtime components. For example, a Sentinet user can instruct a remote Sentinet Node runtime to open a new service endpoint and configure it with specific security policies, access control and monitoring attributes.


Q: How do you identify new product features? From customer feedback, from your R&D team, from a comparison with competitors, or something else?
A: All of the above, with customer feedback being the most important source of information for us. We also rely on the past experience of our own engineers and architects, who have built large-scale enterprise solutions that have processed billions of commercial transactions to date.


Q: The Sentinet 4.0 release is coming. Could you tell us something about the new features you're going to release?
A: There are multiple areas where we are extending Sentinet's capabilities. Some of them are full change audit and change notifications, registering and managing services starting from data schemas (we call it schema-first services), graphical service and application dependency tracking and impact analysis, and a highly extensible, graphically designed message Processing Pipeline that will ship with a number of built-in message processing and transformation components. As always, we are making the message Processing Pipeline highly extensible through easy-to-use .NET interfaces.

Thanks Andrew for the answers. We wish you and the Nevatech team the best with the future releases.

For more information on Sentinet and the Codit offering around Sentinet, please visit our Sentinet pages.



Categories: Sentinet
written by: Massimo Crippa

Posted on Friday, October 24, 2014 6:00 PM

Sam Vanhoutte by Sam Vanhoutte

On October 21, I presented on Azure Hybrid Connections in the Channel 9 studios in Redmond for the AzureConf conference, the biggest online Azure conference in the world. This blog post is the story of my day.

Tuesday was a great day for me.  I had the chance to speak at AzureConf for the first time and it was really great.  AzureConf is the biggest Microsoft Azure online conference, and this was its third edition.  I'm really honored to be among this great line-up of speakers.  The conference was streamed from the Channel 9 studios on the Microsoft campus in Redmond and had several thousands of viewers (stats might be shared later on).  Scott Klein organized the conference this year and I really want to thank him and his team for the good organization and the chance we all got.


The preparation

Since I knew I had to present on Hybrid Connections, I immediately started planning for this talk.  I had never given this talk before (I had only presented Hybrid Connections as part of another session), so there was a lot of preparation needed.  I used the existing TechEd presentation as input and guideline, but added more specific content and details to it in order to position Hybrid Connections and compare it with Service Bus Relay and Virtual Networking.

I also had some specific questions and things I wanted to get clarified, and for that I could count on the help and guidance of Santosh Chandwani (PM on the Hybrid Connections team).  As always, I spent most of the time on my demo, for which I took our Codit Integration Dashboard and moved it to the cloud, while the data and back-end services were still on premises.  I also built a new mobile service and a universal app, my first time.  And to end, I exposed a managed API through Azure API Management.


The day before conference, all speakers were invited for technical pre-checks in the Channel 9 studios.  It was great to see the famous studio and all the equipment that was used there.  You immediately felt the atmosphere was really nice over there.

We got to know the nice crew of people there and had to test our laptops for screen projection, network connectivity and sound.  That turned out to be very important, as both Mike Martin and I had some screen resolution issues.  Scott also handed out our shirts and we all went our own way to prepare for our talks the next day.

AzureConf day

October 21 started.  After a final dry run of the demo, we drove to the studios at 6:45 AM.  Tension was building as we saw the twittersphere getting more active about this event.  People from all over the world were tuning in for the keynote by Scott Guthrie.

We settled ourselves in the speaker room and all watched Scott Guthrie detailing out a lot of nice announcements that can be found on the azure blog.

The live sessions

We were watching the sessions from the other speakers, either from the speaker room, or from the 'channel9 war room'.  I believe the content of the sessions was very good and showed a good variety of services that are available on the Microsoft Azure platform.  The live sessions are available on channel 9 as well.  So if you have missed a session, go ahead and watch it online.

  • Michael Collier: Michael talked about the resource manager in Azure.  Very interesting capabilities of a service that will definitely evolve over time.
  • Mike Martin: Mike had a nice session on one of the topics that is crucial for every company: backups.  He showed how the Azure platform offers features & services for this.
  • Sam Vanhoutte: I guess that's me.  Hybrid connections, web sites, mobile services, service bus relay & API management.  All in one session.
  • Seth Juarez: This was a great session on one of the newest services in Azure: Machine Learning.  By combining humor and complex maths, he made the complex subject of Machine Learning much more 'digestible'.
  • Rick G Garibay: Rick gave a session that was very similar to the sessions I gave on IoT at the UKCSUG, Cloudburst and WAZUG.  He positioned the investments of Microsoft around IoT and discussed Reykjavik and the concepts of cloud-assisted communications.  Great to see that the complex demo worked.  I can guarantee it's not easy.
  • Vishwas Lele: Vishwas showed tons of tools and concepts (of which I believe Docker and the Traffic Manager for SharePoint were really nice).
  • Chris Auld: Chris talked about DocumentDB, the new document database in Azure.  A really good explanation and demonstration of the backend for MSN and OneNote.

Everything was recorded in the Channel 9 studios and here's a nice group picture of all live speakers with the Channel 9 crew.

The recorded sessions

And to add to the great live content, there are also a lot of recorded sessions available on Channel 9.  I would encourage you all to have a look and download those sessions to watch whenever you have the time, as there's really great content out there.

It was a real honour and pleasure to be part of this group of great speakers.  And with this, I would like to thank Scott Klein for having me over, the great crew of Channel 9 and all speakers for the great time.



written by: Sam Vanhoutte

Posted on Wednesday, October 22, 2014 3:35 PM

Maxim Braekman by Maxim Braekman

Ever needed the actions performed by a pipeline component to differ depending on whether the main or the backup transport is used? This post explains how you can implement a simple verification before actually performing any of the required changes.

On some occasions an interface requires the send ports to have a configured backup transport location, just in case the primary location is unreachable. These send ports could also be using a pipeline component which has to perform a couple of actions before allowing the file to be transmitted to its destination. Of course the configuration of these components can be modified per send port, but what if these actions also need to differ when the main location is unreachable and the backup transport is being used?

For the purpose of this post let’s say we will be moving a file from one location to another and we need to configure a backup location, just in case. An additional requirement is that the filename needs to contain some values present in the message body, but needs to differ depending on the main/backup location.
Before we start developing the custom pipeline component - needed because of the filename requirement - we need to find out how we can implement a check on what location is being used.

This can be done by setting up a receive and send port, of course with a configured backup transport location on the latter, and make sure the primary send location is unreachable to allow for the backup transport to kick in. Looking at the tracking this would give us something as shown below.

As you can see, the current configuration allows the send port to retry 3 times before the backup transport is used. Once an attempt is made to send to the backup location, the message is processed successfully.
Now let’s have a closer look at how we can identify what type of transport location is being used. Opening up the context properties of the first ‘transmission failure’-record shows us the existence of a context property named ‘BackupEndpointInfo’.

As you can see, this property contains a GUID referencing the backup transport, which can be used if the main location is unreachable. Now, what if we have a look at the context properties of the message when it is actually being sent over the backup transport?

The ‘BackupEndpointInfo’-property is still present, although it no longer contains a value, since a backup location cannot have another backup.
In order to have a complete overview of the existence/usage of this property, let’s create a send port which does not have a backup transport location configured, and is referring to a correct, reachable location. Send another message through this flow and look at the context properties of the message being processed by this new send port.

Note that these properties are sorted alphabetically and no ‘BackupEndpointInfo’-property is present.
So, this means the property is only available when the backup transport is configured and only contains a value if the send port is sending to the main transport location.

Implementation within custom pipeline component

Now that we have figured out which property can be used to check the transport location, we can use this info to implement our custom pipeline component.
The code can be found below, but to summarize: you attempt to retrieve the BackupEndpointInfo property from the context. If the returned object is NULL, no backup transport has been configured. If the property is present, you can determine, based on its value, whether the backup transport is actually being used.
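A minimal sketch of that check inside a pipeline component's Execute method could look like this (we assume BackupEndpointInfo lives in the standard BizTalk system-properties namespace; verify against your own tracking data):

```csharp
// Read the BackupEndpointInfo context property and branch on its value.
const string SystemPropertiesNs =
    "http://schemas.microsoft.com/BizTalk/2003/system-properties";

object backupEndpointInfo =
    pInMsg.Context.Read("BackupEndpointInfo", SystemPropertiesNs);

if (backupEndpointInfo == null)
{
    // Property absent: no backup transport is configured on this send port.
}
else if (string.IsNullOrEmpty(backupEndpointInfo.ToString()))
{
    // Property present but empty: the backup transport is being used.
}
else
{
    // Property contains a GUID: sending to the main transport location.
}
```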



Whenever you need to find out whether a backup transport is configured and is being used, check the context properties for the existence of the ‘BackupEndpointInfo’ context property, for namespace ‘’.


Categories: BizTalk Pipelines
written by: Maxim Braekman

Posted on Monday, October 13, 2014 10:15 AM

Massimo Crippa by Massimo Crippa

In the extensibility series we explored how to extend Sentinet to add features to the product. In this post we will take a look at Sentinet’s management API, a set of functions to interface with the Sentinet Repository.

The Management API

The Sentinet management API is available as a REST or SOAP service. The main concept here is that every operation you can do using the Sentinet UI is also available as an API.
You can then leverage those interfaces to write custom tools that best fit your needs. A common example is a script to automate your versioning strategy or change authorizations.


With the "mobile first, cloud first" mantra ticking in our minds, we want to make the hosted services' usage information and statistics available to customers outside the enterprise boundaries.
To achieve that, we're going to virtualize a few Management API operations (e.g. usage report, node status) and present them on an Azure website using Azure Hybrid Connections.

Sentinet is natively integrated with Microsoft Azure. Other possible ways to bridge from a cloud-based application to your on-premises resources are to use the Azure Service Bus Relay or, for example, to configure the Sentinet Nodes in a hybrid deployment scenario where some Nodes are deployed on-premises while others are in the cloud.

Virtualize the management API

The native management API - like any other API - can be virtualized to extend or modify its own operations/behavior. Let’s pick the GetTopServiceVersionsUsageReport operation as an example and virtualize it.

The GetTopServiceVersionsUsageReport service returns different statistical data based on the {REPORTTYPE} parameter. I want to simplify this operation to get rid of all the parameters except the number-of-services parameter {TOPCOUNT} for which the statistics need to be fetched.

Let's see which Sentinet features I leveraged to compose the new virtualized service to fit our scenario:
- Virtualization (to expose only the operations we want and reduce the interface complexity).

- Protocol mediation (from HTTPS to HTTP). Although HTTPS should be used everywhere for APIs, we will use HTTP to simplify the scenario configuration.

- Authorization (from username/password to API keys). This is a two-step process, where the first step is to set up access control with an HTTP header validation.

Any management API call has to be authenticated with a token that is obtained by calling the login method with a username and password. The second step is to plug in the SignInManager behavior that maps the API key to a valid authorization token.
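With the API key authorization in place, a client outside the enterprise can call the virtualized operation with a single authenticated GET. The endpoint URL and header name below are hypothetical placeholders for the virtual service we configured, not the actual Sentinet API surface:

```csharp
// Hypothetical client call against the virtualized usage-report operation;
// the URL shape and the API-key header name are illustrative only.
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("X-Api-Key", "<your-api-key>");

    // {TOPCOUNT} is the only remaining parameter: the top 5 services here.
    HttpResponseMessage response = await client.GetAsync(
        "http://sentinet-node/virtual/UsageReport/5");

    string payload = await response.Content.ReadAsStringAsync();
}
```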

Mobile first, cloud first

In order to target a variety of devices, I prepared a simple ASP.NET MVC 5 web application. It uses Bootstrap 3 as its default CSS framework, and it's a matter of a few clicks to publish it to Azure Websites.

You can then set up the Azure website to use Hybrid Connections, which let you build hybrid applications using the same connection string/URI that you would normally use if these were hosted locally on your private network.
Hybrid Connections will be activated only on the node machine where I hosted the virtualized API, so only a specific subset of services will cross the enterprise's boundaries.

You can find the procedure to setup the Azure WebSite and HC at this link.
1. Create an Azure Website
2. Create a Hybrid Connection and a BizTalk Service
3. Install the on-premises Hybrid Connection Manager to complete the connection

Once completed, the Hybrid Connection is displayed in the website as connected.


To test our scenario, access the Azure website at this address. If the virtual machines I used for this demo are up and running, you will see the number of transactions increasing every time you load the main page; otherwise mocked data will be displayed.

For testing the responsive design of the application and its compatibility with different browsers/devices, a service like BrowserStack can be used.


In this post we saw how Sentinet’s management API, like any other API, can be virtualized to change the accessibility and security requirements. In the next post I will continue the extensibility series and discuss the WIF extensibility point.



written by: Massimo Crippa