Codit Blog

Posted on Wednesday, June 22, 2016 8:58 AM

Maxim Braekman by Maxim Braekman

Have you set up communication with a web service before and had to find a way to keep track of some information that was available in the request, but is no longer present in the response? Continue reading, since this post could help you sort out this problem.

Setting up communication with a web service can always turn out to be a bit tricky. You need to take care of configuring the certificate settings, if required, configure the bindings to use the correct protocol, security and so on. But once all of these settings are correct and you start testing, you might notice, depending on the scenario, that you are losing some useful data across the two-way communication, since some of the data that was available in the request no longer appears to be in the response.
In such a case, one could opt to use an orchestration, although this is not the best solution, performance-wise.

An alternative way of storing those values is to create a static class which stores the data based on the InterchangeID of the message. Since this static class needs to be told what data to track, 2 custom pipeline components are needed. Why 2 components? You'll need one to pass the data from the message into the static class and another to retrieve those values from the class and pass them back onto the response message.

Yes, this can also be done by merging the components into a single one and using a property to indicate the direction, but for the sake of this post we will be using 2 separate components, just to keep everything clear.

Imagine a large system, such as AX, which contains a whole bunch of data about several orders, but needs to retrieve some additional information - from another system - before processing these orders. Since these requests could be handled asynchronously, from the AX point of view, the source system will need some kind of ID to match each response to its initial request. In this case the request that is being sent towards BizTalk will contain an order ID, request ID or any other form of identification, just to make sure each response is matched to the correct request.

Okay, so now this request, containing the ID, has arrived in BizTalk, but since the destination system has no need for any ID from an external system, no XML node will be provided for this ID, nor will it be returned within the response. In such a situation this issue becomes a “BizTalk problem” to be resolved by the integration developer/architect.

This is when the use of the aforementioned static class comes in handy. Since the actual call to the destination system is a synchronous action and there is no need for an orchestration to perform any additional actions, we can simply use the custom pipeline components to store/retrieve the original ID assigned by the source system.

The custom pipeline components

The static class might look like the example below, which allows BizTalk to save a complete list of context properties for a specific InterchangeID.
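A minimal sketch of what such a class could look like - names and structure are illustrative, assuming a thread-safe dictionary keyed on the InterchangeID:

using System.Collections.Concurrent;
using System.Collections.Generic;

// Illustrative sketch: keeps a list of context properties per InterchangeID.
public static class ContextCache
{
    // ConcurrentDictionary, since multiple pipeline instances may
    // save/restore values at the same time.
    private static readonly ConcurrentDictionary<string, Dictionary<string, object>> store =
        new ConcurrentDictionary<string, Dictionary<string, object>>();

    public static void Save(string interchangeId, Dictionary<string, object> properties)
    {
        store[interchangeId] = properties;
    }

    public static Dictionary<string, object> Restore(string interchangeId)
    {
        // Removes the entry while retrieving it, so the dictionary
        // does not keep growing.
        Dictionary<string, object> properties;
        return store.TryRemove(interchangeId, out properties) ? properties : null;
    }
}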

Next, you’ll be needing the pipeline components to actually access this class and allow for the data to be saved and restored when needed.

This post will not be zooming into the code of these pipeline components, but below is the general explanation of what these components are supposed to do.

SaveContextOverCallRequest

The first custom pipeline component will retrieve the InterchangeID from the context of the incoming message and use this as a unique ID to save a specified list of properties to the static class. This list could be scaled down by setting a specific namespace, which can be used to filter the available context properties. This would make sure only the properties from the specified namespace are being saved, preventing an overload of data being stored in memory.
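In essence, the Execute method of this component could look like the sketch below, using the ContextCache class from above. The filtering namespace is assumed to be exposed as a design-time property, here called CustomPropertyNamespace:

using System.Collections.Generic;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Design-time property of the component, used to filter which
// context properties get saved.
public string CustomPropertyNamespace { get; set; }

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    const string systemNs = "http://schemas.microsoft.com/BizTalk/2003/system-properties";
    string interchangeId = (string)pInMsg.Context.Read("InterchangeID", systemNs);

    var properties = new Dictionary<string, object>();
    for (int i = 0; i < pInMsg.Context.CountProperties; i++)
    {
        string name;
        string ns;
        object value = pInMsg.Context.ReadAt(i, out name, out ns);

        // Only keep properties from the configured namespace, preventing
        // an overload of data being stored in memory.
        if (ns == CustomPropertyNamespace)
        {
            properties[ns + "#" + name] = value;
        }
    }

    ContextCache.Save(interchangeId, properties);
    return pInMsg;
}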

SaveContextOverCallResponse

The second custom pipeline component will again retrieve the InterchangeID from the context of the incoming message, but this time it will use this value to retrieve the list of context-properties from the static class. Once the properties have been collected, there is no need for the static class to keep track of these values any longer, therefore it can remove these from its dictionary.
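The response side could then look like the following sketch, promoting the saved values back onto the message context (the properties need to exist in a deployed property schema to be promotable):

// Illustrative Execute method for the response-side component,
// using the same ContextCache class as above.
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    const string systemNs = "http://schemas.microsoft.com/BizTalk/2003/system-properties";
    string interchangeId = (string)pInMsg.Context.Read("InterchangeID", systemNs);

    // Restore() also removes the entry from the static class, so no
    // data lingers in memory after the call has completed.
    Dictionary<string, object> properties = ContextCache.Restore(interchangeId);
    if (properties != null)
    {
        foreach (KeyValuePair<string, object> property in properties)
        {
            // Keys were stored as "namespace#name"; promote the values
            // so they remain available for routing.
            string[] parts = property.Key.Split('#');
            pInMsg.Context.Promote(parts[1], parts[0], property.Value);
        }
    }

    return pInMsg;
}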

Using the component

Once these components have been created, they will have to be added to the send/receive pipeline, depending on the type of component.

The send pipeline will contain the ‘SaveContextOverCallRequest’-component to make sure the required properties are being saved. The custom pipeline component should be the last component of this pipeline, since you want to make sure all of the property promotion is finished before the properties are being saved into the static class.

The receive pipeline will contain the ‘SaveContextOverCallResponse’-component, as this will be restoring the saved properties to the context. This should also be the first component in this pipeline, because we want the saved properties to be returned to the context of the message as soon as possible, to make sure these values are accessible for any further processing. Be aware that whether or not you are able to put this first will largely depend on your situation and transport protocol.

Example

To show the functionality of these components, a simple test-case has been set up, in which a request-message is being picked up from a file-location, a connection is made with a service and the response is sent back to a different folder. To give you an idea of the complete flow, the tracking data has been added here.

The request that will be used in this sample is a pretty simple xml-message, which can be seen below:

<ns0:GetData xmlns:ns0="http://Codit.Blog.Stub">
    <ns0:request>
        <ns0:RequestDate>2016-06-15</ns0:RequestDate>
        <ns0:RequestNumber>0002</ns0:RequestNumber>
        <ns0:CustomerID>0001</ns0:CustomerID>
        <ns0:Value>Codit offices</ns0:Value>
    </ns0:request>
</ns0:GetData>

As you can see, this request contains both a request ID and a customer ID, which are the two ‘important’ values in this test case. To make sure these properties are available in the context of the message, we made sure they are promoted by the XML Disassembler, since the fields are indicated as promoted in the schema. Once the flow is triggered, we can have a look at the context properties and notice that the 2 values have been promoted.

The initial raw response that comes back from the service, which is the message before any pipeline processing has been performed, no longer contains these context properties, nor does it contain these values in the body.

<GetDataResponse xmlns="http://Codit.Blog.Stub">
    <GetDataResult xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
        <ResponseDate>2016-06-15T20:06:02.8208975+01:00</ResponseDate>
        <Value>Ghent, Paris, Lisbon, Zurich, Hampshire</Value>
    </GetDataResult>
</GetDataResponse>

However, if we have another look at the context properties after the receive pipeline has done its job, we notice the properties are back in place and can be used for further processing/routing/....

Conclusion

Whenever you need to save a couple of values across a call, this approach is an alternative to building an orchestration just to keep track of those values.

If you are building a flow which will be saving a huge amount of data, you could of course build a solution which saves this data to disk/SQL/... instead, but that is all up to you.

Categories: .NET, BizTalk, Pipelines
written by: Maxim Braekman

Posted on Tuesday, May 31, 2016 8:36 PM

Pieter Vandenheede by Pieter Vandenheede

For a BizTalk automated deployment, I needed to automatically add machine.config WCF behaviorExtensions by using some C# code.

I've been playing around with the BizTalk Deployment Framework lately and for one particular BizTalk application, I needed to add 4 custom behaviorExtensions. I had some very specific logic that I needed to put into some WCF message inspectors. When you think about automating the installation of a BizTalk application, you don't want to be manually adding the behaviorExtensions to the machine.config. So I set out to add these via a C# application. It seems this was not as trivial as I thought it would be. First things first, we need to be able to retrieve the location of the machine.config file:
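A minimal sketch of how this can be done; RuntimeEnvironment.SystemConfigurationFile returns the machine.config path of the runtime the process is running under:

using System;
using System.Runtime.InteropServices;

// Resolves to the machine.config of the current runtime (x86 or x64).
string machineConfigPath = RuntimeEnvironment.SystemConfigurationFile;
Console.WriteLine(machineConfigPath);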

The above code will give you the path of the machine.config file, depending on what runtime (x86 or x64) you are running the code under. When running as x86, you will get the following path:

"C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config"

When running as x64, you will get the following path:

"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config"

Once the path has been found, we need to open the file and position ourselves at the correct location in the file (system.serviceModel/extensions):
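Something along these lines, using the System.Configuration API; the mapped open lets you target the x86 or x64 file explicitly:

using System.Configuration;

// Open the machine.config we located above and navigate to the
// system.serviceModel/extensions section.
var fileMap = new ConfigurationFileMap(machineConfigPath);
Configuration config = ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
ConfigurationSection section = config.GetSection("system.serviceModel/extensions");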

Now this is the point where I initially got stuck. I had no idea I had to cast the section as a System.ServiceModel.Configuration.ExtensionsSection. Doing so allows you to add your behaviorExtension in the config file as such:
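A sketch of that step; the extension name and the fully qualified type are placeholders for your own message inspector's values:

using System.ServiceModel.Configuration;

// The typed cast exposes the behaviorExtensions collection.
ExtensionsSection extensions = (ExtensionsSection)section;
extensions.BehaviorExtensions.Add(new ExtensionElement(
    "myMessageInspector", // placeholder name
    "MyCompany.MyInspectorBehaviorExtension, MyCompany.Inspectors, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef")); // placeholder type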

Don't forget to set ForceSave, as - without it - the update doesn't seem to get written. All together, this gives you the following code:
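Put together, a self-contained sketch could look as follows (extension name and type remain placeholders; references to System.Configuration and System.ServiceModel are required):

using System.Configuration;
using System.Runtime.InteropServices;
using System.ServiceModel.Configuration;

public static void AddBehaviorExtension(string extensionName, string extensionType)
{
    // Machine.config of the runtime this process is running under.
    string machineConfigPath = RuntimeEnvironment.SystemConfigurationFile;

    var fileMap = new ConfigurationFileMap(machineConfigPath);
    Configuration config = ConfigurationManager.OpenMappedMachineConfiguration(fileMap);

    var extensions = (ExtensionsSection)config.GetSection("system.serviceModel/extensions");

    // Avoid duplicate entries when the installer runs more than once.
    if (!extensions.BehaviorExtensions.ContainsKey(extensionName))
    {
        extensions.BehaviorExtensions.Add(new ExtensionElement(extensionName, extensionType));

        // Without ForceSave the change does not get written out.
        extensions.SectionInformation.ForceSave = true;
        config.Save(ConfigurationSaveMode.Modified);
    }
}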

If you, like me, want to adapt both the x86 AND the x64 machine.config file, just replace "Framework" in the x86 machine.config path with "Framework64" and repeat the same steps. I made myself a simple console application which I can call while running the MSI for my BizTalk application. Just make sure the MSI runs as an administrator!

Categories: BizTalk, WCF
written by: Pieter Vandenheede

Posted on Saturday, May 14, 2016 1:54 PM

Brecht Vancauwenberghe by Brecht Vancauwenberghe

Joachim De Roissart by Joachim De Roissart

Michel Pauwels by Michel Pauwels

Pieter Vandenheede by Pieter Vandenheede

Robert Maes by Robert Maes

The Codit recap of the third and last day at Integrate 2016.

Day 3 is already here and looking at the agenda, the main focus of today is IoT and Event Hubs.

Unfortunately we underestimated traffic today and spent quite a long time getting to the Excel, missing the first one and a half sessions. Luckily we were able to ask around and follow Twitter while stuck in traffic.

Azure IaaS for BizTalkers

Stephen W. Thomas walked us through the possibilities Azure IaaS (Infrastructure as a Service) offers nowadays. The offering there is getting bigger every day, and having lots of choice means that the right choice tends to be hard to make: 58 types of Virtual Machines make one wonder which would be the right one. Luckily Stephen was able to guide us through the differences, identifying which ones are the better choice for optimal performance, depending on your MSDN access level. The choices you make have an immediate effect on the monthly cost, so be aware!

Unified tracking on premise and in the cloud

M.R. Ashwin Prabhu was the next presenter. He took off with a very nice demo, showing how to analyze on premise BAM tracking data in a Power BI dashboard, something a lot of people might have already thought about, but never got around to doing. We all know the BAM portal in BizTalk Server has not been updated since 2004 and it urgently needs an update. Customers are spoiled nowadays and, with reason, many of them don't like the looks of the BAM portal and get discouraged from using it.

Moving from BAM data to Logic Apps tracked data is not far-fetched, so another demo followed, combining BAM data and Logic Apps tracked data into one solution, with a very refreshing tracking portal in Power BI, using the following architecture:

(Image by Olli Jääskeläinen)

IoT Cases

IoT is not the next big thing, it is already here now!

Steef-Jan Wiggers explained how, a few years ago, he managed a solution that processed sensor data from windmills. Nowadays Azure components like Event Hubs - the unsung hero of the day -, Azure Stream Analytics, Azure Storage, Azure Machine Learning and Power BI can make anything happen!

Kent Weare proved this in his presentation. He explained how, at his company, they implemented a hybrid architecture to move sensor data to Azure instead of to on premise historians. Historians are capable of handling huge amounts of data and events, something which can also be done by Azure Event Hubs and Stream Analytics. Handling the output of this data was managed by Logic Apps, Azure Service Bus and partly by an on premise BizTalk Server. The decision to move to Azure was mostly based on the options around scalability and disaster recovery.
In short, a very nice presentation, making Azure services like Event Hubs, Stream Analytics, Service Bus, Logic Apps and BizTalk Server click together!

After the break Mikael Häkansson continued on the IoT train. He showed his experience and expertise with two more demos: first he managed to do some live programming and debugging on stage, demonstrating how easy it can be to read sensor data from IoT devices, in this case a Raspberry Pi 3. He also talked about managing devices, such as pushing applications and firmware and software updates to them.
In a last demo he showed us how he made an access control device using a camera with facial recognition software and microservicebus.com.

The last sessions

Next up was Tomasso Groenendijk, talking about his beloved Azure API Management. He showed off its capabilities, especially around security (delegation and SSL), custom domain names and such.

A 3-day conference takes its toll: very interesting sessions, early mornings, long days and short nights. Just as our attention span was being pushed to its limits (nothing related to the speakers, of course), Nick Hauenstein managed to spice things up, almost as much as the food we tasted the last couple of days ;-)

He managed to give us the most animated session of the conference without a doubt. Demonstrated by this picture alone:

(Picture by BizTalk360)

A lightning fast and interactive session, demonstrating his fictional company creating bobbleheads. He demonstrated a setup of Azure Logic Apps and Service Bus, acting and behaving like a BizTalk Server solution: correlation, long running processes, most of the things we know from BizTalk Server.

Really nice demo and session, worth watching the video for!

Because we needed to close up our commercial stand and had a strict deadline for our Eurostar departure, combined with heavy traffic in London, we unfortunately missed Tom Canter's session. We'll make sure to check that video when the recordings are released in a few weeks.

The conclusion

Integrate 2016 was packed with great sessions, enchanting speakers and great people from all over the world. We had a real great time, meeting people from different countries and companies. The mood was relaxed, even when talking to competitors! Props to the organizers of the event and the people working to make things go smoothly at the Platinum Suites of the London Excel. It was great seeing all of these people again, and hearing about everyone's progress since last year's BizTalk Summit 2015.

We take with us the realisation that Azure is hotter than ever. Microsoft is still picking up its pace and new features get added weekly. Integration has never been so complex, but nowadays Microsoft has one - or even more - answers to each question. With such a diverse toolset, both on premise and in the cloud, it does remain hard to keep ahead. A lot of time is spent on keeping up-to-date with Microsoft's ever-changing landscape.

We hope you liked our recaps of Integrate 2016, we sure spent a lot of time on it. Let us know what you liked or missed, we'll make sure to keep it in mind next year.

Thank you for reading!

Brecht, Robert, Joachim, Michel and Pieter

Posted on Friday, May 13, 2016 9:12 PM

Brecht Vancauwenberghe by Brecht Vancauwenberghe

Joachim De Roissart by Joachim De Roissart

Michel Pauwels by Michel Pauwels

Pieter Vandenheede by Pieter Vandenheede

Robert Maes by Robert Maes

The Codit recap of the second day at Integrate 2016.

Day 2 of the Integrate 2016 conference passed by. Please find our recap below.

Azure Functions & Azure App Service

Christopher Anderson kicked off the day with a talk on Azure Functions. He showed us some very impressive Azure App Service statistics, clearly showing that Azure services are fast gaining popularity and that usage increases every day.

He demonstrated Azure Functions and how they can be utilized in combination with Logic Apps. Azure App Service is a cloud platform that enables rapid development of functions, or µ-services as you could call them, which can connect to anything. Azure Functions can also be called from a Logic App, which allows fast and easy extensibility in various programming languages.

If you want to expose a simple µ-service, you should consider Azure Functions. When you are dealing with something more complex, use WebApps, as Azure Functions is designed to keep things simple.

Service Bus

The two sessions dealing with Service Bus were hosted by Dan Rosanova and Clemens Vaster. Two authorities within the integration field. Both also have a more-than-thorough BizTalk Server background!

Dan Rosanova started off looking back on his BizTalk Server days, all while admitting he still loves BizTalk Server. He gave us some tips from his experience transitioning from an on premise BizTalk Server platform to a cloud one: simplify solutions as much as possible; push logic to LOB systems, making them smarter, instead of implementing it in the integration layer and masking problems; keep messaging simple and fast, pushing any logic or transformations into the endpoints.

Clemens Vaster talked us through the roadmap of Service Bus: the upcoming standardization of Azure Service Bus on the AMQP protocol and - not on the slides, but only verbally communicated - the announcement that Azure Stack for on premise will include Event Hubs and Service Bus, although the latter will no longer be free.
Clemens went on about AMQP and its capabilities and dove into detail on how protocols like AMQP and MQTT continue to drive innovation.

Dan also told us earlier that, for large scale customers, an interesting new model for Event Hubs was created: the Event Hubs Dedicated Capacity model. This gives them a dedicated cloud infrastructure, not a shared one like the current model, allowing them to keep pushing their dedicated cloud resources to the limits, instead of potentially taking performance hits due to the shared infrastructure architecture.

Open Source Messaging Landscape

Richard Seroter, a well-known integration mastermind and superhero (again, with BizTalk roots), talked about the open source messaging landscape. At the Integrate 2016 event, we are all Microsoft integration people and we tend to take for granted that what Microsoft does is top-notch. However, integration experts need to dare to look further than their own technology stack; being able to stay ahead requires you to see beyond your own boundaries. Richard is such an expert and showed us some examples of what is available in terms of messaging platforms next to the Microsoft stack.

Kafka, RabbitMQ and NATS were the chosen few. Richard demoed each one in several scenarios, talking about the pros and cons of each and comparing them to their Azure 'neighbor'. It was really impressive to see such lightweight messaging platforms performing very well on simple hardware.

Worth mentioning - and really impressive - was NATS in particular: a 3 MB lightweight 'central nervous system', capable of handling millions of messages per second, with only minimal latency.

Last but not least, Richard is very impressed with the work Microsoft is doing, driving Azure services further every day. He also had - by far - the best slide deck!

HIS 2016 announcements

Paul Larsen, Group PM Manager for Host Integration Server (HIS) at Microsoft, and Steve Melan, Integration MVP, had some news regarding the upcoming release of HIS 2016.

Already announced for HIS 2016 were the platform alignment with both Windows Server and IBM platforms, an improved installation experience and improvements regarding message and data integration, among others.

Today, they surprised everyone with the announcement that HIS 2016 will contain brand new clients for MQ, DB2 and Informix, written in .NET by Microsoft. According to their demo, you can expect a huge performance increase, most likely because the new clients do not use COM Interop. They also offer better tracking and tracing.
Next to that, BizTalk Server CTP2 will contain new adapters for MQ, Informix and DB2 and also include better integration with the Visual Studio IDE!

Azure Service Fabric

Next up was our colleague and CTO Sam Vanhoutte.
Speaking from our own experience in the field, he gave the crowd some insights on when to use which part of the Azure Service Fabric ecosystem.

Microsoft has been using Service Fabric inside their data centers for a long time, and decided to make it more user friendly, so customers could use it for their own purposes. Applications moved from a monolithic approach, in which an application was a single 'package' deployed to each machine, to a µ-services approach.
Splitting up the functionality of a larger application into simpler µ-services allows much better reuse and 'pluggability' of these µ-services, and also enables really fast scaling, faster deployments and better utilisation of Virtual Machines.

There are several ways of working with Azure Service Fabric, for which Sam provided us with a few tips on when to use stateless or stateful services / actors. Towards the end of the session, Sam gave a few scenarios in which he used the various aspects of Azure Service Fabric in the field, like the Codit IoT Field Gateway.

Integration Superheroes

Next up was Michael Stephenson, giving us a talk on how he sees the evolution from a specialized integration team - the way most companies work today - to an approach where each team has at least one integration specialist (aka superhero). A necessary change, as companies nowadays are not quite happy with how the current setup works.

BizTalk Server

The last 3 sessions of the day were focused on BizTalk Server. I have a feeling a lot of people in the room were anticipating these sessions and had some pretty high expectations.

Johan Hedberg brought us a really refreshing session about a typical project lifecycle, but instead of using Team Foundation Server (TFS), he showed us what is possible with Git. Especially noteworthy were Git's more relaxed branching strategy in comparison to TFS and the tooling around it. Using Stash (now called Bitbucket Server) for Git flow management, with Jenkins as a deployment automation tool, was a great way to show off the capabilities around Git and also the Elastic Stack.

Sandro Pereira continued on the session from last year's BizTalk Summit with another set of tips and tricks regarding BizTalk migrations.
The last session was by Nino Crudele who, in his well-known humorous style, brought us some insights on how delicate a BizTalk assessment can be.

 

Also a big congratulations to Axon Olympus, which is part of Codit International. They managed to bring home the 'BizTalk360 Partner of the Year' award for the second time in a row! Well done!

Thank you for reading.

Brecht, Robert, Joachim, Michel and Pieter

Posted on Thursday, May 12, 2016 12:14 AM

Brecht Vancauwenberghe by Brecht Vancauwenberghe

Joachim De Roissart by Joachim De Roissart

Michel Pauwels by Michel Pauwels

Pieter Vandenheede by Pieter Vandenheede

Robert Maes by Robert Maes

The Codit recap of the first day at Integrate 2016.

Event

Joined by our new UK colleagues today, Codit International was represented by no fewer than 4 countries at Integrate 2016. With a total of 18 people, we were the best-represented company at the event!

Bigger and better than the last London Summit, so let's summarize the key take-aways from today.

BizTalk + Logic Apps = A hybrid Vision

Microsoft admitted this morning that last year's vision was too cloud-focused. Today they adjusted that vision to a hybrid one: one that moves from fully on premise solutions to a hybrid approach, considering that most companies have only a fraction of their applications running in the cloud.

Their vision of hybrid integration becomes a marriage between Microsoft BizTalk Server and Azure Logic Apps. Customers can now seamlessly connect traditional on-premises applications to cloud-native applications.


(Picture by Eldert Grootenboer)

What's new in BizTalk?

In addition to the earlier announced features, Microsoft revealed some extra news on the upcoming release of BizTalk Server 2016 RTM:

- A new Logic Apps adapter which enables integration between SaaS and on premise LOB applications.
- Some UI tweaks, which they labeled as a "Repaint" for a consistent look and feel. This however feels like it still needs some more polishing.

So all-in-all, the tried and thoroughly tested existing BizTalk engine remains the same. Fancy new features on that front were not announced.

The planned roadmap for BizTalk Server 2016 RTM remains the same: 2016 Q4. A CTP2 is planned for 2016 Q3 (summer).

What's new in Logic Apps?

A lot more focus on the updates in Logic Apps today:

  • Logic Apps are coming to the Visual Studio IDE in the coming weeks!
  • It will be possible to reuse existing BizTalk Server .BTM mapping files in Logic Apps. There will be full support for BizTalk Server mapping functoids and custom code!
  • Continuous integration: allowing you to add Logic Apps to source control, with versioning, etc.
  • A new BizTalk connector, which is the counterpart of the Logic Apps adapter for BizTalk Server.
  • Added extensibility, allowing you to run custom code.
  • New patterns: request/response, solicit/response and "fire and forget".
  • The addition of some new concepts: scopes, which allow nesting of components with individual compensation blocks (e.g. exception handling on each step). Coming in the next few weeks.
  • The addition of tracking and the possibility to debug. To pinpoint specific Logic Apps instances, you can now track custom properties. Also new is chain tracking, which allows you to chain all tracking events in your instance.
  • Polling HTTP endpoints is now a possibility!
  • Running Logic Apps on a schedule.

Microsoft also introduced a new concept: the Enterprise Integration Pack with the Integration Account. This type of Logic Apps account receives the following capabilities:

  • Containers for centralized storage of schemas, transforms, partners and certificates. A predecessor of an upcoming trading partner management addition in the portal.
  • A VETER pipeline concept: a Logic App can be designed to Validate, Extract, Transform, Enrich and Route, adding possibilities to route to topics and add exception handling on any step.
  • X12 and AS2 support will be added in the future. EDIFACT will be added further down the road, together with TPM and party resolution.

Integrations made simple with Microsoft Flow

At first sight Microsoft Flow looks and feels like IFTTT or Zapier: a recipe-based tool to automate workflows across different apps and services. However, this service is based on Logic Apps and uses the Logic Apps designer.
For example: integrating Twitter with Dropbox in a couple of clicks.

It might look like just another clone, but Microsoft Flow is much more advanced:

  • More advanced flow design: more branches, more steps
  • More connectors
  • You will be able to add your own custom APIs.
  • Debug the flow and trigger it when you need to.
  • Flows can be shared with anyone.

Microsoft Flow brings a lot to the table, but consider the security concerns when handling integrations in a corporate environment. When using Flow, take into account the potentially sensitive nature of the data you are working with.

Flow is free at the moment, but will possibly be licensed on a per-user basis. According to Microsoft, it will be a cheap license model.

Conclusion

A busy first day brings a lot of news and exciting new features. We are all looking forward to the next day of #Integrate2016.

Thank you for reading.

Brecht, Robert, Joachim, Michel and Pieter