
Codit Blog

Posted on Wednesday, October 22, 2014 3:35 PM

by Maxim Braekman

Have you ever needed a pipeline component to behave differently depending on whether the main or the backup transport is being used? This post explains how you can implement a simple verification before actually performing any of the required changes.

On some occasions an interface requires the send ports to have a backup transport location configured, just in case the primary location is unreachable. These send ports could also be using a pipeline component which has to perform a couple of actions before allowing the file to be transmitted to its destination. Of course the configuration of these components can be modified per send port, but what if these actions also need to differ when the main location is unreachable and the backup transport is being used?

For the purpose of this post, let’s say we will be moving a file from one location to another and we need to configure a backup location, just in case. An additional requirement is that the filename needs to contain some values present in the message body, but it needs to differ depending on whether the main or the backup location is used.
Before we start developing the custom pipeline component - needed because of the filename requirement - we need to find out how to check which location is being used.

This can be done by setting up a receive port and a send port, with a backup transport location configured on the latter, and making sure the primary send location is unreachable so that the backup transport kicks in. Looking at the tracking, this gives us something like the result shown below.

As you can see, the current configuration allows the send port to retry 3 times before allowing the backup transport to be used. Once an attempt is made to send to the backup location, the message is processed successfully.
Now let’s have a closer look at how we can identify which type of transport location is being used. Opening up the context properties of the first ‘transmission failure’ record shows us the existence of a context property named ‘BackupEndpointInfo’.

As you can see, this property contains a GUID referencing the backup transport that can be used if the main location is unreachable. Now, what if we have a look at the context properties of the message when it is actually being sent over the backup transport?

The ‘BackupEndpointInfo’-property is still present, although it no longer contains any value, since a backup location cannot have another backup.
In order to have a complete overview of the existence and usage of this property, let’s create a send port which does not have a backup transport location configured and refers to a correct, reachable location. Send another message through this flow and look at the context properties of the message being processed by this new send port.

Note that these properties are sorted alphabetically and no ‘BackupEndpointInfo’-property is present.
So, this means the property is only available when the backup transport is configured and only contains a value if the send port is sending to the main transport location.

Implementation within custom pipeline component

Now that we have figured out which property can be used to determine the transport location, we can use this information in our custom pipeline component.
The approach is summarized below: attempt to retrieve the BackupEndpointInfo property from the context. If the returned object is null, no backup transport has been configured. If the property is present, its value tells you whether the backup transport is actually being used.
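A minimal sketch of that check is shown below. This is not the original component, just an illustration assuming a standard IComponent.Execute implementation; the other component members and the actual filename logic are left out.

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Sketch only: the remaining IComponent/IBaseComponent members of the pipeline
// component are omitted, as is the filename manipulation itself.
public class BackupTransportAwareComponent
{
    private const string SystemPropertiesNs =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Read() returns null when the property is not present on the context.
        object backupEndpointInfo = pInMsg.Context.Read("BackupEndpointInfo", SystemPropertiesNs);

        if (backupEndpointInfo == null)
        {
            // No backup transport is configured on this send port.
        }
        else if (string.IsNullOrEmpty(backupEndpointInfo.ToString()))
        {
            // Property present but empty: the message is being sent over the backup transport.
        }
        else
        {
            // Property contains the backup endpoint GUID: the main transport location is being used.
        }

        return pInMsg;
    }
}
```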

 

Conclusion

Whenever you need to find out whether a backup transport is configured and is being used, check the message context for the existence of the property ‘BackupEndpointInfo’ in the namespace ‘http://schemas.microsoft.com/BizTalk/2003/system-properties’.

 

Categories: BizTalk Pipelines
written by: Maxim Braekman

Posted on Monday, October 13, 2014 10:15 AM

by Massimo Crippa

In the extensibility series we explored how to extend Sentinet to add features to the product. In this post we will take a look at the Sentinet management API, a set of functions to interface with the Sentinet Repository.

The Management API

The Sentinet management API is available as a REST or SOAP service. The main concept is that every operation you can perform through the Sentinet UI is also available through the API.
You can leverage those interfaces to write custom tools that best fit your needs. A common example is a script that automates your versioning strategy or changes authorization rules.

Scenario

With the "mobile first, cloud first" mantra ticking in our minds, we want to make the hosted services usage information and statistics available to the customers outside the enterprise boundaries.
To achieve that we're going to virtualize a few Management API operations (e.g. usage report, node status) and to present them on an azure website using Azure Hybrid Connections.

Sentinet is natively integrated with Microsoft Azure. Other possible ways to bridge from a cloud-based application to your on-premises resources are to use the Azure Service Bus Relay or, for example, to configure the Sentinet Nodes in a hybrid deployment scenario where some Nodes are deployed on-premises while others run in the cloud.

Virtualize the management API

The native management API - like any other API - can be virtualized to extend or modify its own operations/behavior. Let’s pick the GetTopServiceVersionsUsageReport operation as an example and virtualize it.

The GetTopServiceVersionsUsageReport operation returns different statistical data based on the {REPORTTYPE} parameter. I want to simplify this operation by getting rid of all the parameters except {TOPCOUNT}, the number of services for which the statistics need to be fetched.

Let's see which Sentinet features I leveraged to compose the new virtualized service for our scenario:
- Virtualization (to expose only the operations we want and reduce the interface complexity).

- Protocol mediation (from HTTPS to HTTP). Although HTTPS should be used everywhere for APIs, we will use HTTP to simplify the scenario configuration.

- Authorization (from username/password to API keys). This is a two-step process, the first step being to set up access control with an HTTP header validation.

Every management API call has to be authenticated with a token that is obtained by calling the login method with a username and password. The second step is to plug in the SignInManager behavior that maps the API key to a valid authorization token.
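To illustrate how a consumer could call the virtualized operation, here is a rough sketch. The base address, route, "X-Api-Key" header name and "topCount" parameter are assumptions made for this example, not the actual contract.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of a client calling the virtualized usage-report operation with an API key.
// The URL, route and header below are illustrative only; the virtual service validates
// the header and the SignInManager behavior maps the key to a valid Sentinet
// authorization token behind the scenes.
class UsageReportClient
{
    static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://sentinet-node/vs/usage/") })
        {
            client.DefaultRequestHeaders.Add("X-Api-Key", "<your-api-key>");

            // Simplified contract: only the number of services to report on remains a parameter.
            HttpResponseMessage response =
                await client.GetAsync("GetTopServiceVersionsUsageReport?topCount=5");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```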

Mobile first, cloud first

In order to target a variety of devices I prepared a simple ASP.NET MVC 5 Web Application. It uses Bootstrap 3 as its default CSS framework, and publishing it to Azure Websites is a matter of a few clicks.

You can then set up the Azure Website to use Hybrid Connections, which lets you build hybrid applications using the same connection string/URI that you would normally use if the services were hosted locally on your private network.
The Hybrid Connection is activated only on the node machine where I hosted the virtualized API, so only a specific subset of services crosses the enterprise boundary.

You can find the procedure to set up the Azure Website and the Hybrid Connection at this link.
1. Create an Azure Website
2. Create a Hybrid Connection and a BizTalk Service
3. Install the on-premises Hybrid Connection Manager to complete the connection

Once completed, the Hybrid Connection is displayed as connected on the website.

Test

To test our scenario, access the Azure Website at http://sentinetdashboard.azurewebsites.net. If the virtual machines I used for this demo are up and running, you will see the number of transactions increase every time you load the main page; otherwise mocked data will be displayed.

To test the responsive design of the application and its compatibility with different browsers and devices, a service like BrowserStack can be used.

Conclusion

In this post we saw how the Sentinet management API, like any other API, can be virtualized to change its accessibility and security requirements. In the next post I will continue the extensibility series and discuss the WIF extensibility point.

Cheers,

Massimo

written by: Massimo Crippa

Posted on Friday, October 10, 2014 4:25 PM

by Glenn Colpaert

Last Wednesday Glenn Colpaert was awarded ‘Integration MVP’ for the first time.
This blog post is all about saying thank you and giving an overview of where it all started for Glenn.

Awarded first time integration MVP

Let’s start this blog post with someone else telling my story of October 1st and 2nd; I’m happy to quote Sam Vanhoutte directly from our internal SharePoint:
“Imagine this. You get nominated for MVP, you have filled out all the community things you have done over the past months (blog posts, organizing gwab, speaking at events, helping out on forums and twitter...) and then you know the day is come where all new MVP's get an e-mail in their mailbox.
And then you see other MVP's (old and new) tweeting how happy and honored they are with their (re)new(ed) title. And you congratulate them and F5 your own mailbox. And nothing happens, no new e-mails. You feel disappointed, but you know that you just have to continue and hope for the new quarter in a few months.
You settle with it and then you have to do some banking business. You expect an e-mail from that online banking system and you don't get it. You watch the spam folder and there you don't find the banking e-mail. No, you find following e-mail from the MVP program to let you know that you have been awarded the MVP title for the Integration competence. Oh the suspense!”

The day after

This award really came as a surprise to me, and not just because I discovered it way too late.
I believe this award is more than I currently deserve, but nevertheless it gives me that extra motivation to work even harder to make an impact on the current integration community.

So, where did it all start?

I guess it all started about 5 years ago when I joined this amazing company. They gave me all the necessary opportunities to develop myself on a professional and personal level.
So actually a big thank you goes out to Codit for giving me all the support and possibilities that were necessary to achieve this award.
When it comes to community activities, I started being active in the community about 2 years ago, blogging on a regular basis and joining local user groups and meetups.
This year I had the opportunity to help organize a GWAB location in cooperation with AZUG.be, and they also gave me the chance to do my first community sessions. A couple of months later I did my first session for the local BizTalk User Group (BTUG).
I also continued blogging on a regular basis and answering questions on the BizTalk MSDN forum.

What now?

Like I said at the start of this post, this award gives me that extra motivation to continue what I’m doing and actually kick it up a notch.
To end this post I would like to thank some people for giving me the insight and opportunities in this amazing community! (In no particular order): Sam Vanhoutte, Tom Kerkhove, Pieter Vandenheede, Mike Martin, Maarten Balliauw, Yves Goeleven, Kristof Rennen, BTUG.be and AZUG.

Cheers,
Glenn Colpaert

 

Categories: Community
written by: Glenn Colpaert

Posted on Wednesday, October 8, 2014 4:52 PM

by Tom Kerkhove

by Sam Vanhoutte

Two weeks ago, a lot of great speakers talked about the future of integration, SaaS, mobility, and more during the Codit Integration Summit. Near the end of that day Sam Vanhoutte and I were on stage to show our demo.

In this blog post I will briefly walk you through our concept to illustrate what we've built and how we've used Codit Integration Cloud as an integration hub.

Data farming with Kinect for Windows & Twilio

During the coffee breaks attendees were able to complete a small survey about their vision on the Microsoft integration roadmap, mobility, SaaS and whether they think Codit has a special “NomNomNom” calendar (yes, we really do have that :)). The fun part was that they had to use their body to answer the questions: with a Kinect for Windows v2 sensor they pushed buttons and moved sliders!

For those who were unable to attend but still wanted to complete the survey, Sam developed a Twilio application that allowed them to call a Twilio number. This number asked the same questions as the Kinect application, and the callers could answer by using the keypad. Each time a question was answered, Twilio called a custom API exposed through Azure API Management and passed us the given answer. Sam illustrated this on stage with a Skype call to show how it worked.
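Purely as an illustration of the mechanism (the routes, question text and follow-up logic are made up for this sketch, not taken from Sam's application), a Twilio voice webhook that asks a question and captures the keypad answer could look roughly like this:

```csharp
using System.Web.Mvc;

// Illustrative sketch of a Twilio voice webhook: Twilio requests this action when the
// call starts, the returned TwiML asks a question and gathers one keypad digit, and
// Twilio then posts the chosen digit back to the action URL as the "Digits" form field.
public class SurveyController : Controller
{
    [HttpPost]
    public ContentResult Question()
    {
        const string twiml =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
            "<Response>" +
            "  <Gather action=\"/survey/answer\" method=\"POST\" numDigits=\"1\">" +
            "    <Say>Press 1 for yes, press 2 for no.</Say>" +
            "  </Gather>" +
            "</Response>";

        return Content(twiml, "text/xml");
    }

    [HttpPost]
    public ContentResult Answer(string digits)
    {
        // Forward the given answer to the survey API (exposed through Azure API Management)
        // here, then continue with the next question. Omitted in this sketch.
        return Content("<Response><Say>Thank you.</Say></Response>", "text/xml");
    }
}
```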

With both the Kinect & Twilio application receiving answers we decided to send everything to Codit Integration Cloud so the results could be processed.

Twilio Flow

Figure I – Survey flow of a Twilio call

What the attendees didn’t know was that the Kinect application was more than a simple survey application: thanks to Kinect for Windows we were able to generate some metadata about the attendee taking the survey. For starters, we took 10 pictures that were stored in Azure Blob Storage and locally on the device, so we knew who he or she was. Next to that, we were able to track the user’s body, which allowed us to assign a “unique” ID to the survey session. Last but not least, while the attendee was answering questions, their facial expressions and behaviour were also being tracked. This allowed us to do some face analytics indicating whether the person was not really interested, whether he or she was smiling, how likely it was that they were wearing glasses, etc.

The application also used the Kinect to count the number of people in the scene, which was sent to Codit Integration Cloud along with the other tracked data.

Note – the “unique” body ID is not 100% unique: if the user left the “scene” and came back in, he or she would be tracked with a different ID.
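For reference, here is a simplified sketch of how the Kinect for Windows v2 SDK exposes the people count and the body tracking ID used above; the survey UI, photo capture and face analytics are left out, and the wiring to Integration Cloud is only hinted at in comments.

```csharp
using System.Linq;
using Microsoft.Kinect;

// Simplified sketch: count the tracked people in the scene and read their body IDs
// with the Kinect for Windows v2 SDK. Survey UI, photo capture and face analytics omitted.
class BodyTrackingSketch
{
    private readonly Body[] bodies = new Body[6]; // the v2 sensor tracks up to six bodies

    public void Start()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += OnBodyFrameArrived;
    }

    private void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        using (BodyFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;

            frame.GetAndRefreshBodyData(bodies);
            var tracked = bodies.Where(b => b != null && b.IsTracked).ToList();

            // 'People-Detection': the number of people currently in the scene.
            int peopleInScene = tracked.Count;

            foreach (Body body in tracked)
            {
                // The "unique" ID used to correlate a survey session; a person who leaves
                // the scene and comes back gets a new TrackingId.
                ulong sessionId = body.TrackingId;
                // ... attach this ID to the survey data being sent to Integration Cloud.
            }
        }
    }
}
```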

Kinect flow

Figure II – Overview of the Kinect application

Codit Integration Cloud

Codit Integration Cloud is a flexible cloud platform by Codit where you can configure one or more receivers of a certain type, e.g. Azure Blob Storage, as input endpoints. Next to receivers you can also configure one or more senders, which filter on the inbound data and send it onwards; these also come in several types. Everything that happens in Integration Cloud is automatically tracked, so you can monitor what is happening.

The platform allows us to easily expand our flow as the process changes – we used Power Pivot to monitor the data, but we could also send push notifications to our phones by adding a Notification Hub sender. Codit Integration Cloud is mainly based on configuration instead of development, which makes it easy to use, but it is still extensible. You can create workflows and upload them so that they are executed before a message is sent or when a new one comes in.

In our demo we used Azure Blob Storage as the receiver type, polling on our blob container, and we used a workflow to promote the application-specific properties. We could also have used Service Bus queues and added the properties to the message properties, but we wanted to show the flexibility you have.
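As a rough sketch of that Service Bus alternative (the queue name, property names and values below are illustrative, not what we actually deployed), the applications could have attached the routing metadata directly to the brokered message:

```csharp
using Microsoft.ServiceBus.Messaging;

// Sketch of the Service Bus alternative: attach the routing metadata as message
// properties so Integration Cloud (or any subscriber) can filter on them.
// Connection string, queue name and property values are illustrative.
class SurveyPublisherSketch
{
    public void SendAnswer(string connectionString)
    {
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "survey-events");

        var message = new BrokeredMessage("{ \"question\": \"SaaS in 2015?\", \"answer\": \"Yes\" }");
        message.Properties["Action"] = "Question-Answered";
        message.Properties["Event"] = "Codit Integration Summit";
        message.Properties["Booth"] = "Kinect Play Box";
        message.Properties["PersonId"] = "72057594037928043"; // Kinect body TrackingId, when available

        client.Send(message);
    }
}
```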

With that said, let’s have a look at the data the Twilio and Kinect applications are sending to Integration Cloud!

As you can see there are three different types of actions –

  • People-Detection indicates that the number of tracked people in the scene has changed
  • Question-Answered represents somebody answering a question, where we clearly see the question and the given answer
  • Expression-Analytics contains the analytics of the tracked person regarding their facial expressions

Note that each incoming message contains the Person ID - when available - which allows us to link it to the other data.

Codit Integration Cloud - Tracking

Figure III – Codit Integration Cloud Tracking

In our demo we only used two clients – Twilio and Kinect – but imagine that your corporation wants to use these applications at all its events. That means there could be hundreds of these applications running, farming and sending data at the same time. This is where you will also benefit from Codit Integration Cloud, because it will aggregate all the data from the clients at the different events and booths. This allows us to use Codit Integration Cloud as an integration hub where we can make routing decisions based on the Actions, Events, Booths, etc. from the context of the messages.

Your corporation can spin up as many clients as it wants, while our hub handles the routing of the data to the destinations we want.

Data is knowledge

In our demo Sam used Power Pivot to show how easily we could analyse the survey results based on the tracked information in Codit Integration Cloud and show the results to the attendees. This allowed us to conclude, for example, that in 2015 80% of the attendees will have more than 75% SaaS in their corporation.

Power Pivot

Figure IV – Power Pivot survey results

But we could do more than survey analytics; there are many scenarios in which your corporation could process the data. Let’s look at some other scenarios –

  • Improved Customer Support – While we currently only use the Kinect “Body ID” as a reference to the user, we could expand the scenario with RFID tracking on the Kinect side. This would allow us to map the Kinect ID to the name of the attendee and link the questions to that specific person. As an example, your corporation would know that person X is looking into mobility but needs a partner.
  • Internet of Things – Thanks to the ‘People-Detection’ action we know how many people there are in the scene, so we could send a command to a device near the door of the booth.
    This would allow us to display a red “busy” LED while someone is taking the survey, or change it back to green when someone may come in.
  • Mobile Notifications/SMS – Imagine that attendees could raise their hand when they have a question; we could then send a message to Codit Integration Cloud with a new action ‘Requesting-Assistance’. Based on the Event, Booth and Action we could send a push notification to a client application, or an SMS to one of your employees on-site who can help the attendee with the question.

There are other scenarios as well where you can send all your data to Codit Integration Cloud and process it later on – for example Machine Learning, where you store your data in Azure Blob Storage and perform machine learning on it to get the most out of your data and make predictions.

Data processing

Figure V – Data processing examples

We ended our demo by raffling two bottles of champagne among the attendees who took the survey; to do so we used the pictures that were taken during their sessions.

Regarding any privacy concerns: rest assured, the data captured during the surveys was handled very strictly and has since been purged.

Thanks for reading,

Tom

Categories: .NET Azure IoT

Posted on Tuesday, September 23, 2014 7:14 PM

by Glenn Colpaert

by Tom Kerkhove

Yesterday Codit organized its second Integration Summit, with different sessions on new technologies and testimonials from customers and community enthusiasts.
In this blog post we will look back on what happened yesterday, so in case you missed it... happy reading!

Opening keynote: The Future of Integration (Richard Seroter)

 

In this amazing opening keynote Richard talked about the future of integration: what are the current trends, and how do we prepare for future technologies?

The main question Richard tried to answer in his session is how the current trends change our typical XML application integration.
In our current integration enterprises it’s all about data volume, endpoints, technologies and destinations.
With new trends like cloud computing, IoT, mobility, wearables and many other things, a whole new range of challenges is introduced for us integration specialists and companies.

Richard took us through all the current trends and tackled some of the more important implications for the integration industry.

It’s difficult to go through all of these trends and implications here, so I would suggest checking out Richard’s slides when they are made available, because they contain a very interesting overview of the current trends.
However, I would like to pass on the tips and suggestions Richard gave us on how to prepare for this new wave of trends.
First of all, BE ENGAGED! Get on Twitter and start following people, join conferences and meetups (even on technologies you have never worked with) and, most important of all, share your knowledge with your co-workers.


GET EDUCATED! Learn the new products, protocols and architectures by playing around and trying them hands-on. Never stop training yourself.
ENGINEER! Decompose the current dependencies in your applications and integration solutions. Stay on the edge of technology, give the latest technologies a try and, most important of all, try to automate as many processes as possible.

You can view his slidedeck here.

How to make everybody love SaaS (Sam Vanhoutte)

 

Sam Vanhoutte shared his vision of and experience with the integration of SaaS in different architectures and exchange patterns, the challenges that come with it, and how you could solve them, going from external connectivity to security & identity to mobility.

He also showed us how easily you can add Salesforce to your infrastructure without creating a new Active Directory.

The key here is that each scenario has its own set of challenges depending on a lot of factors, for example: is it a Ground-to-Cloud, Cloud-to-Ground or Cloud-to-Cloud integration? It's just a matter of finding the right technology or service to bridge the gap.

 

Integration project tips & tricks (Toon Vanhoutte, Serge Verborgh and Danny Buysse)

 

This session consisted of three different parts.

In the first part Serge explained to the attendees what methodology Codit uses when doing an integration project; this methodology is not only applicable to Codit integration projects but can be applied to any integration project.
It’s all about identifying and tackling common concerns of an integration project at an early stage.

The second part of this session was handled by Toon and was all about continuous integration and Application Lifecycle Management (ALM).
The key to a good ALM setup is the repository, automated testing and deployment, and behaviour testing. Always know which codebase is deployed where!
To round it all up, Toon demonstrated how to easily set up ALM and automated deployment with the help of some tools Codit has developed.
After Toon, Danny took the stage to talk about performance management and detecting issues on your platform at an early stage. He talked about the differences between application monitoring and performance management monitoring, which is really a story about reactive versus proactive operations.
For this proactive monitoring Codit uses AIMS for BizTalk; read more information on the AIMS product.

Mobility: it's not about the "if", but about the "how to"! (Rudy Van Hoe)

 

Rudy Van Hoe walked us through the vision of Microsoft: what they have learned and why they made certain changes. He talked about the new "mobile first, cloud first" model and how you can integrate mobility with your cloud IaaS/PaaS infrastructure.


Look beyond the device and application, a testimonial (Hans Valcke).

 

 

In the testimonial session Hans took us on a tour of the infrastructure setup of the Mohawk Group (Unilin). He gave us an idea of how they tackle the challenges in their integration and services infrastructure, with more than 100 BizTalk Servers and several hundred services.


One of the keys to the success of their integration setup and the manageability of their services is Sentinet; Hans demonstrated how they use the product in their enterprise. Read more information on Sentinet.
Next to managing BizTalk and services, the integration team also has to manage connectivity to certain mobile applications. The biggest challenge there, according to Hans, is the data and the provisioning of the data and the application.
Last but not least, Hans gave us some tips for a successful mobile strategy; here are some of the key ones:
• Decouple your applications from your ERP
• Stimulate your development team to re-use existing services
• Monitoring and alerting is the key to keep your application in a healthy state
• Make mobile development abstract from base services development
• Operate in a secure way

 

 

Internet of Things – Hype or Reality? (Piet Vandaele)

 

In this session Piet gave us an overview of what IoT is all about and how it leverages a number of other trends.
IoT is really all about embedded sensors connected to the internet and to each other, allowing businesses and manufacturers to make better decisions at the moment they need it.
The reason IoT is booming right now comes down to two simple reasons: there is a whole new range of chipsets available that are more power efficient when it comes to connectivity, and they are more affordable than before.

The maturity model of IoT consists of three different stages: first there is basic information support (reading out meter details, for example), then remote operation support, and finally remote performance improvement support.
Piet demonstrated this model with some real-life cases and scenarios.
The question asked most when it comes to IoT scenarios is whether to create an on-site, cloud or hybrid (Cisco routers and switches) implementation.
According to Piet it’s not an OR story but rather an AND story: preprocess the data locally, and store it and execute logic in the cloud.

 

Win of the day: How to fit a full day Summit in one single demo? (Tom Kerkhove and Sam Vanhoutte)

 

 

Sam Vanhoutte & Tom Kerkhove had built a demo that covered most of the topics covered during the day and raffled two bottles of champagne to the Kinect Play Box. Keep an eye on the Codit Blog for a detailed post on the demo!

 

Keynote: Marc's Motivational Talk (Marc Herremans)

 

We had a full day of integration talks - which were very interesting - but we ended with a different kind of session: Marc Herremans joined us and told us about his life before and after his accident. He taught us - or at least me - that nothing can stop you from achieving your goals, and that you have to fight for the things you want to achieve and love, especially for your family, as this is the most important thing in life.

This was a very interesting session that can't really be written down, but I'd like to summarize it with a quote from Marc: "Every setback is an opportunity to fight back."