Codit Blog

Posted on Tuesday, November 22, 2016 5:14 PM

Pim Simons by Pim Simons

Why do HTTP 400 status codes return a NACK, while HTTP 500 status codes return an ACK? And how to deal with this...
This post describes an interesting scenario we encountered at a customer and our considerations for dealing with it.

At one of our customers I had implemented the ReturnAddress messaging pattern (http://www.enterpriseintegrationpatterns.com/patterns/messaging/ReturnAddress.html), by using a generic BizTalk application which sends an asynchronous response message to a client application. The solution had been running successfully for some time, when we encountered a strange situation.

The BizTalk application uses a one-way WCF-Custom send port, using a wsHttpBinding, to send a message to the client application. Also, I had added the Delivery Notification functionality to make sure messages are delivered successfully.

It is important to realize that one-way send ports that use SOAP will receive a technical response containing an HTTP status code. When the send port receives an HTTP status code in the 200 range, the Delivery Notification generates an ACK and BizTalk knows the message was successfully delivered.

So far, so good. The application had been through testing on the Test and Acceptance environments, had been deployed to the Production environment and had been running for several months without any problems. Until it appeared that some of the messages sent using the generic BizTalk application were ‘not arriving’ at the client application. This happened at random and, what was really strange, the logging in BizTalk showed that the message was successfully sent and that BizTalk had received an ACK response as part of the Delivery Notification. Also, there was no mention of an error in the event log of the BizTalk servers.

After some debugging we found the source of the problem. The message sent by BizTalk was successfully received by the client application; however, the client application encountered an error processing the message and returned an HTTP 500 status code. So now the question was: why does the Delivery Notification not generate a NACK when an HTTP 500 status code is received? I had expected that any status code in the HTTP 400 or 500 range would result in a NACK.

This turned out not to be the case. While status codes in the HTTP 400 range will result in a NACK, the status codes in the HTTP 500 range will result in an ACK and BizTalk will view this message as successfully delivered at the client application. The logic behind this seems to be that the status codes in the HTTP 400 range indicate that the message was not received by the client application (hence the NACK) and the status codes in the HTTP 500 range indicate that the message was received by the client application, but that the client application encountered an exception. Since the message was delivered at the client application, BizTalk views this as a successful delivery and will generate an ACK as part of the Delivery Notification.

Unfortunately, there isn’t any documentation on MSDN on which status codes will return an ACK or NACK.

The documentation on the SOAP HTTP response states that “In case of a SOAP error while processing the request, the SOAP HTTP server MUST issue an HTTP 500 "Internal Server Error" response and include a SOAP message in the response containing a SOAP Fault element indicating the SOAP processing error”. For reference, see https://www.w3.org/TR/2000/NOTE-SOAP-20000508/#_Toc478383529.

Some discussion followed on the validity of catching the HTTP 500 error in BizTalk, since the message was successfully delivered and accepted by the client application. That means that, from a technical perspective, the responsibility would now lie at the client application to handle the error. From a functional responsibility perspective however, it was decided to find a way to catch the HTTP 500 error in BizTalk, as this would enable the customer's administrators to use the same resubmit functionality we had created by using a generic BizTalk error handling framework.

So I had to make sure the HTTP 500 status code was somehow caught, so that BizTalk would generate a NACK and the error handling framework would pick up the error. Fortunately, this can be achieved quite easily by implementing a WCF behavior on the one-way send port. The WCF behavior checks in the AfterReceiveReply message inspector whether the reply is a fault message and, if so, throws an exception using the fault description.
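
A minimal sketch of such a message inspector, with illustrative names and without the endpoint behavior plumbing that registers it on the send port, could look like this:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Hypothetical sketch: throw on SOAP faults so the Delivery Notification produces a NACK.
public class ThrowOnFaultMessageInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        return null; // nothing to correlate
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        if (reply == null || !reply.IsFault)
        {
            return;
        }

        // Buffer the reply so it can be inspected here and still be passed on afterwards.
        MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
        reply = buffer.CreateMessage();

        MessageFault fault = MessageFault.CreateFault(buffer.CreateMessage(), int.MaxValue);
        throw new FaultException(fault.Reason.GetMatchingTranslation().Text);
    }
}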

By implementing this WCF behavior on a one-way send port, BizTalk will generate a NACK when a response is received with an HTTP status code in the 400 or 500 range. Sometimes the default behavior surrounding technical responsibility doesn’t align with the requirements and responsibilities from a functional point of view, and this may just offer a solution for you as well.

Categories: BizTalk
written by: Pim Simons

Posted on Monday, October 3, 2016 3:48 PM

Pieter Vandenheede by Pieter Vandenheede

For BizTalk Server developers, given the choice between XSLT and the BizTalk mapper, opinions tend to differ. This is my view on the purpose and value of both, trying to provide the ultimate answer: should I use XSLT or the mapper to create BizTalk maps?

Here at codit.eu, we are always eager to do things as fast and as efficiently as possible. Lately, there have been several cases at clients/prospects where I had to make a case for the "Codit" way of working and, more specifically, how we make a BizTalk mapping... Some experienced BizTalk developers tend to look at me with an awful look of disgust when I tell them we do not use the BizTalk mapper for mappings... at all. Yes, at Codit, we always use custom XSLT in our BizTalk mappings. And this blog post will try to explain why! First, let me talk to you about the difference between the BizTalk mapper and custom XSLT...

What is XSLT?

  • XSLT or Extensible Stylesheet Language Transformations is a language for transforming XML documents into other XML documents, HTML, plain text or XSL-FO.
  • XSLT is, in contrast to most programming languages, declarative instead of imperative. It's based on pattern matching: rather than listing an imperative sequence of actions to perform, templates define how to handle a node matching a particular XPath-like pattern.
  • XSLT uses XPath.
  • XSLT was not made for BizTalk Server; it is BizTalk Server that implements XSLT 1.0. To this day it still remains on this version, including BizTalk Server 2016 CTP2! XSLT 2.0 has been out for a long while now, but BizTalk remains at 1.0, because .NET does not offer support for XSLT 2.0. With XSLT 3.0 out last year, one might wonder if XSLT 2.0 support will ever come...

What is the BizTalk mapper?

  • The BizTalk mapper is a very nifty visualisation tool, created to visualize a mapping between a source and a target schema.
  • The BizTalk mapper is quite easy to use, especially since it uses a drag-and-drop mechanic to map one field to another.
  • Using functoids, a developer can loop/modify/adapt/select certain data before putting the result in the resulting output.
  • Using more than one cascading functoids, one can easily chain up these operations to allow a more complex mapping.
  • The BizTalk mapper generates XSLT 1.0!
  • The BizTalk mapper facilitates complex mappings by using pages in the grid view.

Let's compare!

Comparing one to the other is always hard, especially if you are in favor of one in particular. Let's try to be as objective as possible anyway... Let me know in the comments if you find otherwise!

Performance - winner: XSLT
Custom XSLT is - unless you are working with really easy maps - almost always better performing. The reasoning behind this is that the mapper will, for example, create too many variables for every substep. Any developer optimizing a mapping will see straight away that these can be optimized. The mapper is a tool which generates XSLT. For easier mappings, the generated XSLT will be as good as anyone can write it. The moment it gets more complex, you would be able to tweak the generated XSLT code to perform better.

(So far: XSLT 1 - 0 Mapper)

Ease of use - winner: Mapper
For some reason XSLT is something special: people tend to be afraid of it when they do not know it. As it happens, not many people write XSLT easily, so there is a certain threshold to get over. For people who already know XSLT, it flows naturally. The mapper is built to be intuitive and easy to use, for the untrained BizTalk professional and the seasoned BizTalk veteran alike. There are hundreds of scenarios you can tackle easily with it; only a few require a custom functoid or some custom XSLT.

(XSLT 1 - 1 Mapper)

Source Control - winner: XSLT
If you use a custom XSLT file, you need to add it to your solution and also to your source control. For every check-in you perform, you get a perfect version history: you can clearly see each and every byte of code that was changed, since it's just text, like any source code you write in .NET. The mapper is more complex for source control versioning. Your .btm file contains representations of the graphical links you made by dragging and dropping. It contains codes for every page, functoid, etc... and its location on the grid. Updating a mapping can affect a whole lot more code than just your small change.

(XSLT 2 - 1 Mapper)

Maintainability - winner: draw
It might take some time to ‘dive’ into a mapping when working with XSLT. But the same can be said of the mapper.
Making small changes can be as easy as searching for the node(s) you need to change and updating the code.
It might take some time to ‘dive’ into a mapping when working with the mapper, especially when working with multiple pages and complex links and functoids; in several cases it might even take longer. However, just like in XSLT, it depends on how you structure your map.

(XSLT 3 - 2 Mapper)

Interoperability - winner: XSLT
XSLT can be run anywhere and there is support for it everywhere: Visual Studio, Notepad++ (XML Tools plugin), Altova, Eclipse, oXyGen, etc... It can be run in lots of editors and from .NET/Java/etc... XSLT is a standard, here to stay, proven and tested. Be sure, however, to keep yourself to XSLT 1.0! Try to avoid inline C# code or extension objects, or your interoperability is also gone! Unfair competition for the mapper: the mapper is only available in the BizTalk Developer Tools for Visual Studio. Your existing mappings will, however, be transferable to Logic Apps, with existing functoids. But this is nowhere near as interoperable compared to XSLT.

(XSLT 4 - 2 Mapper)

Debugging - winner: draw
XSLT can be debugged from within Visual Studio. Open your XSL file and click Debug. Easy. The mapper can be debugged, just like XSLT. You can step into functoids. Just as easy.

(Final score: XSLT 5 - 3 Mapper)

This is how we (Codit) do it

At Codit, it is customary to do practically everything in custom XSLT. However, we are not ignorant of the mapper. It is a great tool and not using it for what it does best would be such a waste. So this is our way of working:

  1. Create your mapping file (.btm) and select the source and target schemas.
  2. Link the fields you need in your specific mapping, using the BizTalk mapper, but do not use any functoids.
  3. Validate your mapping in Visual Studio, locate the generated XSLT and place it in an 'xsl' subfolder, using the same filename as your .btm file.
  4. Assign the XSL file to your .btm file and make sure to delete all of the links in your grid view. This way, any future developer looking at the code knows that no mistakes can be made: it's all in the custom XSLT.
  5. Edit your custom XSLT and enjoy your freedom!

Some XSLT tips & tricks

Here are some additional tips and tricks I like to give our developers who are starting off their integration career:

  • Use proper spacing in your XSLT! Empty lines between templates and empty XML comments before and after an <xsl:for-each/> make your structures stand out so much more.
  • Use proper, clear and descriptive variable naming. It makes such a difference.
  • Write and use comments for "future-you"! Don't give "future-you" any reason to hate "past-you", because you will regret that extra 5 minutes you neglected to spend on comments while you still 'got it'.
  • Don't do math in XSLT! Don't tell anyone, but it's not very good at it. Use extension objects or specific math functions instead (see the sketch after this list).
  • Avoid inline C# code in your XSLT at all costs. We have seen that inline C# code in your mapping may result in memory leaks, for example when you call your mapping from a custom pipeline component.
  • Stylize the first line of your stylesheet. Put all namespaces on a separate line, for example, for easier readability.
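
To illustrate the extension-object tip, here is a small, hypothetical example (the helper class and namespace URI are made up) that does the rounding in .NET and calls it from XSLT 1.0 via XslCompiledTransform:

using System;
using System.IO;
using System.Xml;
using System.Xml.Xsl;

// Hypothetical example of the extension-object tip: do the math in .NET,
// call it from the stylesheet. Names and the namespace URI are illustrative.
public class MathHelper
{
    public double Round(double value, double decimals)
    {
        return Math.Round(value, (int)decimals);
    }
}

public static class ExtensionObjectDemo
{
    public static void Main()
    {
        const string xslt = @"<xsl:stylesheet version=""1.0""
    xmlns:xsl=""http://www.w3.org/1999/XSL/Transform""
    xmlns:math=""urn:codit-math-helper"">
  <xsl:template match=""/"">
    <Total><xsl:value-of select=""math:Round(sum(//Price), 2)""/></Total>
  </xsl:template>
</xsl:stylesheet>";

        var transform = new XslCompiledTransform();
        transform.Load(XmlReader.Create(new StringReader(xslt)));

        var arguments = new XsltArgumentList();
        arguments.AddExtensionObject("urn:codit-math-helper", new MathHelper());

        using (var writer = new StringWriter())
        {
            transform.Transform(
                XmlReader.Create(new StringReader("<Order><Price>1.111</Price><Price>2.222</Price></Order>")),
                arguments,
                writer);
            Console.WriteLine(writer.ToString()); // output contains <Total>3.33</Total>
        }
    }
}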

Conclusion

XSLT is the way to go! Although it does mean you need to invest in yourself: XSLT 1.0, XPath 1.0, etc... these are things you will need to learn. However, consider it a good investment! Knowledge of XSLT can be used in several fields, from front-end design to PDF generation; it is something you will need at some point and it is very easy to learn! Also consider this, as a BizTalk / integration consultant: people using the mapper will not easily be able to handle an XSLT file. People who know XSLT can do both, since any BizTalk map can be converted to XSLT in a few seconds. And one more thing: whenever things get really complex, the developers who favor the mapper still might need to copy/paste some of that custom XSLT into their scripting functoids to make their mapping work.

If you are interested in learning XSLT, please check the reference material provided at the end of this post. Also, be aware that Codit offers quite an extensive XSLT training, designed for integration products like BizTalk Server, Logic Apps, etc...

Please let me know if you have any remarks/comments. I'll be happy to elaborate further or to review some sections, given enough feedback.

For now: happy XSLT-ing!

Reference material

XSLT W3 Schools Tutorial - http://www.w3schools.com/xsl/  

Note: this post also appeared on https://pvandenheede.wordpress.com/2016/09/20/the-case-for-xslt/

Categories: BizTalk
written by: Pieter Vandenheede

Posted on Wednesday, June 22, 2016 8:58 AM

Maxim Braekman by Maxim Braekman

Have you set up communication with a web service before and had to find a way to keep track of some information that was available in the request, but is no longer present in the response? Continue reading, since this post could help you sort out this problem.

Setting up communication with a web service can always turn out to be a bit tricky. You need to take care of configuring the certificate settings (if required), configuring the bindings to use the correct protocol and security, and so on. But once all of these settings are correct and you start testing, now and again, depending on the scenario, you might notice you are losing some useful data across the two-way communication, since some of the data that was available in the request no longer appears to be in the response.
In such a case, one could opt to use an orchestration, although this is not the best solution, performance-wise.

An alternative way of storing those values is by creating a static class which stores the data based on the InterchangeID of the message. Since this static class needs to be told what data it needs to track, 2 custom pipeline components are needed. Why 2 components? You’ll need one to pass on the data from the message into the static class and another to retrieve those values from the class and pass them back onto the response message.

Yes, this can also be done by merging the components into a single one and using a property to indicate the direction, but for the sake of this post we will be using 2 separate components, just to keep everything clear.

Imagine a large system, such as AX, which contains a whole bunch of data about several orders, but needs to retrieve some additional information - from another system - before processing these orders. Since these requests could be processed asynchronously from the AX point of view, the source system will need some kind of ID to match the response to the initial request. In this case the request that is sent towards BizTalk will contain an orderID, requestID or any other form of identification, just to make sure each response is matched to the correct request.

Okay, so now this request, containing the ID, has arrived in BizTalk, but since the destination system has no need for any ID from an external system, no XML node will be provided for this ID, nor will it be returned within the response. In such a situation this issue becomes a “BizTalk problem” to be resolved by the integration developer/architect.

This is when the use of the aforementioned static class comes in handy. Since the actual call to the destination-system is a synchronous action and there is no need for an orchestration to perform any additional actions, we can simply use the custom pipeline components to store/retrieve the original ID, assigned by the source-system.

The custom pipeline components

The static class might look like the example below, which allows BizTalk to save a complete list of context properties for a specific InterchangeID.
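
A minimal sketch of such a class, with illustrative names and keyed on the InterchangeID:

using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical sketch of the static store described above; names are illustrative.
public static class ContextPropertyStore
{
    // Keyed on the InterchangeID; each entry holds the context properties to restore later.
    private static readonly ConcurrentDictionary<string, Dictionary<string, object>> Store =
        new ConcurrentDictionary<string, Dictionary<string, object>>();

    public static void Save(string interchangeId, Dictionary<string, object> properties)
    {
        Store[interchangeId] = properties;
    }

    public static Dictionary<string, object> Retrieve(string interchangeId)
    {
        // Remove the entry on retrieval so the dictionary does not keep growing.
        Dictionary<string, object> properties;
        return Store.TryRemove(interchangeId, out properties) ? properties : new Dictionary<string, object>();
    }
}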

Next, you’ll be needing the pipeline components to actually access this class and allow for the data to be saved and restored when needed.

This post will not be zooming into the code of these pipeline components, but below is the general explanation of what these components are supposed to do.

SaveContextOverCallRequest

The first custom pipeline component will retrieve the InterchangeID from the context of the incoming message and use this as a unique key to save a specified list of properties to the static class. This list can be scaled down by setting a specific namespace, which is used to filter the available context properties. This makes sure only the properties from the specified namespace are saved, preventing an overload of data being stored in memory.
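
A hypothetical sketch of the core of this component, leaving out the pipeline component plumbing and error handling:

using System.Collections.Generic;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Hypothetical core of the SaveContextOverCallRequest component; the interfaces
// (IBaseComponent, IPersistPropertyBag, ...) and attributes are omitted for brevity.
public partial class SaveContextOverCallRequest
{
    private const string SystemPropertiesNamespace =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    // Design-time property: only context properties in this namespace are stored.
    public string PropertyNamespaceFilter { get; set; }

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        string interchangeId = (string)pInMsg.Context.Read("InterchangeID", SystemPropertiesNamespace);

        var propertiesToSave = new Dictionary<string, object>();
        for (int i = 0; i < pInMsg.Context.CountProperties; i++)
        {
            string name;
            string ns;
            object value = pInMsg.Context.ReadAt(i, out name, out ns);
            if (ns == PropertyNamespaceFilter)
            {
                propertiesToSave[name] = value;
            }
        }

        ContextPropertyStore.Save(interchangeId, propertiesToSave);
        return pInMsg;
    }
}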

SaveContextOverCallResponse

The second custom pipeline component will again retrieve the InterchangeID from the context of the incoming message, but this time it will use this value to retrieve the list of context properties from the static class. Once the properties have been collected, there is no need for the static class to keep track of these values any longer, so it can remove them from its dictionary.
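
And the matching sketch for the response side, again without the plumbing and assuming the store shown earlier:

using System.Collections.Generic;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Hypothetical core of the SaveContextOverCallResponse component; it assumes the
// same namespace filter as the request component.
public partial class SaveContextOverCallResponse
{
    private const string SystemPropertiesNamespace =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    public string PropertyNamespaceFilter { get; set; }

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        string interchangeId = (string)pInMsg.Context.Read("InterchangeID", SystemPropertiesNamespace);

        // Retrieve also removes the entry, so nothing keeps lingering in memory.
        Dictionary<string, object> savedProperties = ContextPropertyStore.Retrieve(interchangeId);
        foreach (KeyValuePair<string, object> property in savedProperties)
        {
            // Promote so the values can be used for routing; Write would do if promotion is not needed.
            pInMsg.Context.Promote(property.Key, PropertyNamespaceFilter, property.Value);
        }

        return pInMsg;
    }
}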

Using the component

Once these components have been created, they will have to be added to the send/receive pipeline, depending on the type of component.

The send pipeline will contain the ‘SaveContextOverCallRequest’-component to make sure the required properties are being saved. The custom pipeline component should be the last component of this pipeline, since you want to make sure all of the property promotion is finished before the properties are being saved into the static class.

The receive pipeline will contain the ‘SaveContextOverCallResponse’-component, as this will be restoring the saved properties to the context. This should also be the first component in this pipeline, because we want the saved properties to be returned to the context of the message as soon as possible, to make sure these values are accessible for any further processing. Be aware that whether or not you are able to put this first will largely depend on your situation and transport protocol.

Example

To show the functionality of these components, a simple test-case has been set up, in which a request-message is being picked up from a file-location, a connection is made with a service and the response is sent back to a different folder. To give you an idea of the complete flow, the tracking data has been added here.

The request that will be used in this sample is a pretty simple xml-message, which can be seen below:

<ns0:GetData xmlns:ns0="http://Codit.Blog.Stub">
  <ns0:request>
    <ns0:RequestDate>2016-06-15</ns0:RequestDate>
    <ns0:RequestNumber>0002</ns0:RequestNumber>
    <ns0:CustomerID>0001</ns0:CustomerID>
    <ns0:Value>Codit offices</ns0:Value>
  </ns0:request>
</ns0:GetData>

As you can see, this request contains both a request ID and a customer ID, which are the two ‘important’ values in this test case. To make sure these properties are available in the context of the message, we made sure they are promoted by the XML Disassembler, since the fields are indicated as promoted in the schema. Once the flow is triggered, we can have a look at the context properties and notice that the two values have been promoted.

The initial raw response that comes back from the service - the message before any pipeline processing has been performed - no longer contains these context properties, nor does it contain these values in the body.

<GetDataResponse xmlns="http://Codit.Blog.Stub">
  <GetDataResult xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
    <ResponseDate>2016-06-15T20:06:02.8208975+01:00</ResponseDate>
    <Value>Ghent, Paris, Lisbon, Zurich, Hampshire</Value>
  </GetDataResult>
</GetDataResponse>

However, if we have another look at the context properties after the receive pipeline has done its job, we notice the properties are back in place and can be used for further processing/routing/....

Conclusion

Whenever you need to save a couple of values across a call, there is an alternative to building an orchestration to keep track of these values.

If you are building a flow that will be saving a huge amount of data, you could of course build a solution which saves this data to disk/SQL/..., but that is up to you.

Categories: BizTalk
written by: Maxim Braekman

Posted on Monday, May 2, 2016 5:34 PM

Korneel Vanhie by Korneel Vanhie

Due to the XslCompiledTransform library in BizTalk 2013, user exceptions thrown via xsl:message terminate are no longer visible in the application event log.

Recently we received a comment from one of the students in our XSLT Training.

As shown in one of the examples, he tried to throw a custom exception from an XSLT mapping and expected to find his error message in the application event log.

What he saw however was the following generic exception message:

The Messaging Engine failed while executing the inbound map for the message coming from source * with the Message * Details:"Exception has been thrown by the target of an invocation." 

Upon inquiring about the BizTalk version used (BizTalk 2013 R2), we had an inkling what might be the issue and tried to reproduce it with an XSLT map that raises a custom error via xsl:message terminate.

After executing this map from a Send or Receive port we found the same generic exception in the event log:

The Messaging Engine failed while executing the inbound map for the message coming from source * with the Message * Details:"Exception has been thrown by the target of an invocation." 

When debugging the application and catching the exception, we did find the custom user exception message in the inner-exception.  
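
A minimal stand-alone sketch, assuming a map that uses <xsl:message terminate="yes">, shows the same XslCompiledTransform behaviour and how to walk the exception chain to reach the real message:

using System;
using System.IO;
using System.Xml;
using System.Xml.Xsl;

// Hypothetical stand-alone repro; the XSLT is embedded as a string for convenience.
class TerminateRepro
{
    const string Xslt = @"<xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform"">
  <xsl:template match=""/"">
    <xsl:message terminate=""yes"">Error: Mapping Terminated from XSL</xsl:message>
  </xsl:template>
</xsl:stylesheet>";

    static void Main()
    {
        var transform = new XslCompiledTransform();
        transform.Load(XmlReader.Create(new StringReader(Xslt)));

        try
        {
            transform.Transform(XmlReader.Create(new StringReader("<root/>")), null, TextWriter.Null);
        }
        catch (Exception ex)
        {
            // Stand-alone, the terminate text surfaces directly; hosted in BizTalk it ends up
            // behind the generic "Exception has been thrown by the target of an invocation."
            // message, so walk the whole chain down to the inner exception.
            for (Exception current = ex; current != null; current = current.InnerException)
            {
                Console.WriteLine(current.GetType().Name + ": " + current.Message);
            }
        }
    }
}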

To validate that this behavior was new in BizTalk 2013, we reverted to the old mapping engine as described in this blog post.

Sure enough, after failing another message, we got the following entry in the event log:

The Messaging Engine failed while executing the inbound map for the message coming from source URL: * with the Message Type *. Details:"Transform terminated: 'Error: Mapping Terminated from XSL"

Until recently we had to choose between throwing custom exceptions and using the compiled transform library, which is a shame: receiving readable exceptions from the mapping can greatly simplify error handling.

Luckily, starting from BizTalk 2013 R2 CU2, you can select the mapping engine on a per-transformation basis (https://support.microsoft.com/en-us/kb/3123752).

Categories: BizTalk
written by: Korneel Vanhie

Posted on Thursday, April 28, 2016 3:17 PM

Toon Vanhoutte by Toon Vanhoutte

Jonathan Maes by Jonathan Maes

A real-life example of how redis caching significantly improved the performance of a large-scale BizTalk messaging platform.

With some colleagues at Codit, we’re working on a huge messaging platform between organizations, which is built on top of Microsoft BizTalk Server. One of the key features we must deliver is reliable messaging. Therefore we apply AS4 as a standardized messaging protocol. Read more about it here. We use the AS4 pull message exchange pattern to send the messages to the receiving organization. Within this pattern, the receiving party sends a request to the AS4 web service and the messaging platform returns the first available message from the organization's inbox.

Initial Setup

Store Messages

In order to support this pattern, the messages must be stored in a durable way. After some analysis and prototyping, we decided to use SQL Server for this message storage. With the FILESTREAM feature enabled, we are able to store the potentially large message payloads on disk within one SQL transaction.

(1) The messages are stored in the SQL Server inbox table, using a BizTalk send port configured with the WCF-SQL adapter. The message metadata is saved in the table itself, the message payload gets stored on disk within the same transaction via FILESTREAM.

Retrieve Messages

As the BizTalk web service that is responsible for returning the messages will be used in high throughput scenarios, a design was created with only one pub/sub to the BizTalk MessageBox. This choice was made in order to reduce the web service latency and the load on the BizTalk database.

These are the two main steps:

(2) The request for a message is received and validated on the WCF receive port. The required properties are set to get the request published on the MessageBox and immediately returned to the send pipeline of the receive port. Read here how to achieve this.

(3) A database lookup with the extracted organization ID returns the message properties of the first available message. The message payload is streamed from disk into the send pipeline. This avoids publishing a potentially large message on the MessageBox. The message is returned this way to the receiving party. In case there’s no message available in the inbox table, a warning is returned.

Potential Bottleneck

The pull pattern puts a lot of additional load on BizTalk, because many organizations (100+) will be polling for new messages at regular time intervals (e.g. every 2 seconds). Each pull request gets published on the BizTalk MessageBox, which causes extra overhead. As these pull requests will often result in a warning that indicates there’s no message in the inbox, we need to find a way to avoid overwhelming BizTalk with such requests.

Need for Caching

After some analysis, it became clear that caching is the way to go. Within the cache, we can keep track of whether a certain organization has new messages in its inbox or not. In case there are no messages in the inbox, we need a way to bypass BizTalk and immediately return a warning. In case there are messages available in the organization’s inbox, we just continue the normal processing as described above. In order to select the right caching software, we listed the main requirements:

  • Distributed: there must be the ability to share the cache across multiple servers
  • Fast: the cache must provide fast response times to improve message throughput
  • Easy to use: preferably simple installation and configuration procedures
  • .NET compatible: we must be able to extend BizTalk to update and query the cache

It became clear that redis meets our requirements perfectly:

  • Distributed: it’s an out-of-process cache with support for master-slave replication
  • Fast: it’s an in-memory cache, which ensures fast response times
  • Easy to use: simple “next-next-next” installation and easy configuration
  • .NET compatible: there's a great .NET library that is used on Stack Overflow

Implement Caching

To ease the implementation and to be able to reuse connections to the cache, we have created our own RedisCacheClient. This client has 2 connection strings: one to the master (write operations) and one to the slave (read operations). You can find the full implementation on the Codit GitHub. The redis cache is used in a key/value way: the key contains the OrganizationId, the value contains a Boolean that indicates whether there are messages in the inbox or not. Implementing the cache is done on three levels:

(A) In case a warning is returned that indicates there’s no message in the inbox, the cache gets updated to reflect the fact that there is no message available for that particular OrganizationId. The key/value pair also gets a time-to-live assigned.

(B) In case a message is placed on the queue for a specific organization, the cache gets updated to reflect the fact that there are messages available for that particular OrganizationId. This ensures that the key/value pair is updated as new messages arrive. This is faster than waiting for the time-to-live to expire.

(C) When a new request arrives, it is intercepted by a custom WCF IOperationInvoker. Within this WCF extensibility point, the cache is queried with the OrganizationId. In case there are messages in the inbox, the IOperationInvoker behaves as a pass-through component. In case the inbox of the organization is empty, the IOperationInvoker bypasses the BizTalk engine and immediately returns the warning. This avoids the request being published on the MessageBox. Below is the main part of the IOperationInvoker; make sure you check the complete implementation on GitHub.
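
A simplified, hypothetical skeleton of such an invoker (the real implementation on the Codit GitHub differs in naming, AS4 parsing and error handling; StackExchange.Redis is assumed as the client library):

using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using StackExchange.Redis;

// Hypothetical skeleton of the cache-checking invoker.
public class InboxCacheOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker _innerInvoker;
    private readonly IDatabase _cache;

    public InboxCacheOperationInvoker(IOperationInvoker innerInvoker, ConnectionMultiplexer redis)
    {
        _innerInvoker = innerInvoker;
        _cache = redis.GetDatabase();
    }

    public bool IsSynchronous
    {
        get { return true; }
    }

    public object[] AllocateInputs()
    {
        return _innerInvoker.AllocateInputs();
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        var request = (Message)inputs[0];
        string organizationId = ExtractOrganizationId(request);

        // "false" in the cache means the inbox is known to be empty: bypass BizTalk entirely.
        RedisValue hasMessages = _cache.StringGet(organizationId);
        if (hasMessages.HasValue && hasMessages == "false")
        {
            outputs = new object[0];
            return CreateEmptyInboxWarning(request.Version);
        }

        // Otherwise act as a pass-through and let the request be published on the MessageBox.
        return _innerInvoker.Invoke(instance, inputs, out outputs);
    }

    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    {
        return _innerInvoker.InvokeBegin(instance, inputs, callback, state);
    }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    {
        return _innerInvoker.InvokeEnd(instance, out outputs, result);
    }

    private static string ExtractOrganizationId(Message request)
    {
        // Placeholder: in reality the organization ID is parsed from the AS4 pull request.
        return "organization-id";
    }

    private static Message CreateEmptyInboxWarning(MessageVersion version)
    {
        // Simplified warning body; the real solution returns a proper AS4 warning message.
        return Message.CreateMessage(version, "http://tempuri.org/EmptyInboxWarning", "No message available in the inbox.");
    }
}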

Results

After implementing this caching solution, we have seen a significant performance increase in our overall solution. Without caching, response times for requests on empty inboxes were on average 1.3 seconds for 150 concurrent users. With caching, response times decreased to an average of 200 ms.

Lessons Learned

Thanks to the good results, we introduced the redis cache for other functionality in our solution. We use it for caching configuration data, routing information and validation information. During the implementation, we picked up some lessons learned:

  • Redis is a key/value cache, change your mindset to use it to the maximum.
  • Re-use connections to the cache, as creating a connection is the most costly operation (see the sketch after this list).
  • Avoid serialization of cached objects.
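
As an illustration of reusing connections, a trimmed-down, hypothetical version of the RedisCacheClient mentioned earlier (connection strings and names are made up; the full implementation is on the Codit GitHub):

using System;
using StackExchange.Redis;

// Hypothetical, trimmed-down cache client applying the lessons above.
public static class RedisCacheClient
{
    // ConnectionMultiplexer is expensive to create: build it once and reuse it.
    private static readonly Lazy<ConnectionMultiplexer> MasterConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("redis-master:6379"));

    private static readonly Lazy<ConnectionMultiplexer> SlaveConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("redis-slave:6379"));

    public static void SetInboxState(string organizationId, bool hasMessages, TimeSpan? timeToLive = null)
    {
        // Writes go to the master; plain strings avoid any serialization overhead.
        IDatabase database = MasterConnection.Value.GetDatabase();
        database.StringSet(organizationId, hasMessages ? "true" : "false", timeToLive);
    }

    public static bool? GetInboxState(string organizationId)
    {
        // Reads go to the slave; null means "unknown", so the request should pass through to BizTalk.
        RedisValue value = SlaveConnection.Value.GetDatabase().StringGet(organizationId);
        if (!value.HasValue) return null;
        return value == "true";
    }
}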

Thanks for reading!
Jonathan & Toon