
Codit Blog

Posted on Monday, February 19, 2018 11:00 AM

by Nicolas Cybulski

A best practice guide when faced with uncertain results during a project due to lack of information.

Kicking the can down the road

I guess it has happened to the best of us. You’re on a project, whether during sales, analysis or even development, and your customer utters the sentence: “Well, since we don’t have all the information yet, let’s take this vague path, and we’ll see how it turns out.” The customer just kicked the can down the road.

This innocent sentence is a potential trap, since you’re now dealing with a “known unknown”, or, in other words: a risk.

When you’re in a similar situation, your first reflex should be to ask the following three questions:

  • WHEN will we evaluate the solution? (Planning)
  • WHAT will be the conditions against which to evaluate the solution? (Scope)
  • HOW will we react if the solution does NOT pan out like we’ve expected? (Action)

Planning (WHEN)

So, the first question you need to ask is “WHEN”. It is important that a fixed point in time is agreed with the customer to ensure that the new solution is actually evaluated.

It is up to the customer and the project manager to decide how much risk they want to take. Are they prepared to wait until the entire solution is developed before evaluating? In that case, the possibility exists that the solution is not fit for use and everything needs to be redone.

A better way to go (especially when the proposed solution is uncertain or vague) is to cut it into smaller iterations and evaluate early. This way, problems can be caught in an early stage and corrective actions can be performed.

Scope (WHAT)

Now that we’ve set out one or more moments of evaluation, it is important to define exactly WHAT we’re going to evaluate. Since the customer (or even your own team) decided to take an uncertain approach to a vaguely described scope (e.g. we need to generate reporting, yet we don’t know exactly which tool to use, or how the data should be represented), it is important that everybody is on the same page as to which criteria need to be fulfilled.

This evaluation process is linked to planning. The later you evaluate the solution, the more precise the scope should be, since there is little or no way of correcting the approach afterwards.

Once again: the greater the uncertainty of the scope, the shorter the iterations should be. But even short iterations need fixed criteria and deliverables up front. These criteria define the scope of the iteration.

Action (HOW)

Last, but most definitely not least, the customer needs to be informed about potential actions, should the result of the evaluation turn out to be unsatisfactory. Usually the “we’ll see” sentiment originates in the inability (or unwillingness) to make long-term decisions.

However, kicking the can down the road is never a good strategy when dealing with projects. Sooner or later these potential time bombs detonate, usually in a late stage of the project when budgets and deadlines are tight.

So, it is of utmost importance that the customer is made aware that the suggested solution might not work out. A worst-case scenario needs to be set up in advance, and follow-up actions (extra budget, a change in deadlines, dropping the feature altogether, …) need to be communicated.


So, in conclusion: when working on a project, it is important not to fall into the trap of pushing risks down the road because information is missing.

Either delay the development until more information is available, or adapt your iterations according to the vagueness of the proposed solution. Plan your evaluation moments, define the scope of each iteration and communicate a plan “B” should the worst come to pass.


Hope you enjoyed my writing!
Please don't hesitate to contact me if you think I totally missed or hit the mark.

Posted on Wednesday, February 7, 2018 1:20 PM

by Pim Simons

During a migration from BizTalk 2006R2 to BizTalk 2016 we ran into an issue with the “ESB Remove Namespace” pipeline component. This component is available out-of-the-box when the ESB Toolkit is installed and is used to remove all namespaces from a message.

After successfully completing the migration and putting the new environment in production, the customer also needed to process a new message. As with the other messages received in their BizTalk environment, all of the namespaces have to be removed from the message and a new one added. For this, the customer had used the “ESB Remove Namespace” and “ESB Add Namespace” pipeline components, and this setup had been successfully migrated to BizTalk 2016.

However, when the new message was received by BizTalk we received this error:

Reason: The XML Validator failed to validate.
Details: The 'nil' attribute is not declared.

It turned out the XSD of the new message has a field that is nillable, and in the message we received the field was indeed marked as nil. The “ESB Remove Namespace” pipeline component removed the xsi namespace and prefix, which caused the “XML Validator” pipeline component to fail!

Changing the message content or the XSD was not an option, since the XSD was created and maintained by the external party that was sending us the message. We ultimately ended up recreating the “ESB Remove Namespace” pipeline component as a custom pipeline component and modifying the code.

The “ESB Remove Namespace” pipeline component contains the following snippet in the code that processes the attributes:

    // Skip namespace declarations; write every other attribute without its namespace
    if (string.Compare(inReader.Name, "xmlns", StringComparison.OrdinalIgnoreCase) != 0 &&
        !inReader.Name.StartsWith("xmlns:", StringComparison.OrdinalIgnoreCase))
    {
        writer.WriteStartAttribute("", inReader.LocalName, "");
    }
We replaced this with:

    // Preserve the xsi:nil attribute with its prefix and namespace,
    // so that the XML Validator can still resolve it
    if (inReader.LocalName == "nil" &&
        inReader.NamespaceURI == "")
    {
        writer.WriteStartAttribute(inReader.Prefix, inReader.LocalName, inReader.NamespaceURI);
    }
    // Skip namespace declarations; write every other attribute without its namespace
    else if (string.Compare(inReader.Name, "xmlns", StringComparison.OrdinalIgnoreCase) != 0 &&
             !inReader.Name.StartsWith("xmlns:", StringComparison.OrdinalIgnoreCase))
    {
        writer.WriteStartAttribute("", inReader.LocalName, "");
    }
Now when we receive a message containing a node that is marked as nil, the custom “ESB Remove Namespace” pipeline component handles it correctly and the message is processed.
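The same idea can be illustrated outside BizTalk. The following Python sketch (purely illustrative, not the actual pipeline component; the sample message is made up) strips all namespaces from a message but leaves the xsi:nil attribute intact:

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

def strip_namespaces(xml_text):
    """Remove all namespaces from element tags and attributes,
    but keep xsi:nil intact so schema validation still works."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        # Drop the namespace from the element tag: '{uri}Name' -> 'Name'
        if el.tag.startswith('{'):
            el.tag = el.tag.split('}', 1)[1]
        # Drop namespaces from attributes, except xsi:nil
        for name in list(el.attrib):
            if name.startswith('{') and name != '{%s}nil' % XSI:
                el.attrib[name.split('}', 1)[1]] = el.attrib.pop(name)
    # Keep a readable 'xsi' prefix in the serialized output
    ET.register_namespace('xsi', XSI)
    return ET.tostring(root, encoding='unicode')

message = (
    '<ns:Order xmlns:ns="http://example.org/order" '
    'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
    '<ns:DeliveryDate xsi:nil="true" /></ns:Order>'
)
print(strip_namespaces(message))
```

The element namespaces disappear, while the nil marker survives with its xsi prefix and namespace declaration, which is exactly what the validator needs.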

We could not find any information on MSDN about the “ESB Remove Namespace” pipeline component not supporting nillable nodes, and I find it strange that the pipeline component does not support this. To me, this seems like a bug.

Categories: BizTalk
written by: Pim Simons

Posted on Tuesday, February 6, 2018 2:05 PM

by Glenn Colpaert

Exposing data with different content types in a performant way can be a challenging problem to solve. Azure Search tackles this problem by providing a full-text search experience for web and mobile applications.

When talking about modern application development, where enterprise applications exist both on-premises and in the cloud, companies want to integrate beyond their firewall, typically with SaaS-based applications or APIs exposed by third parties.

Next to integrating with different services or applications, many companies also want to expose data in a simple, fast and secure way through API-first development. Performance and a great experience are key to the success of your APIs. This is where Azure Search comes in...


For a data-sharing platform that we are currently building, we have a large number of files stored on Blob Storage. These files are ingested into the platform through different endpoints and protocols (HTTPS, FTP, AS2, ...). When files are ingested into the platform, different types of metadata are extracted from the file content and added to the file before it is stored in Blob Storage. These metadata values are made available for querying through APIs.

Our first implementation of the API directly queried Blob Storage to search for specific files matching the metadata filters provided in the API calls. We started noticing the limits of this implementation because of the large number of blobs inside our storage container. This is due to the limited query capabilities of Blob Storage: we needed to list the blobs and then do the filtering inside our implementation, based on the metadata.

To optimize our searches and performance, we quickly introduced Azure Search into our implementation. To be clear: Azure Search does not execute queries across all the blobs in Azure Blob Storage; rather, it indexes all the blobs in the storage account and puts a search layer on top of them.

Azure Search

Azure Search is a search-as-a-service cloud solution that gives developers APIs and tools for adding a rich search experience over your content in web, mobile, and enterprise applications. All of this without managing infrastructure or needing to become a search expert.

To use Azure Search there are a couple of steps you need to take. First of all, you need to provision the Azure Search service; this will be the scope of your capacity, billing and authentication, and it is fully managed through the Azure Portal or through the Management API.

When your Azure Search service is provisioned, you need to define one or more indexes linked with your search service. An index is a searchable collection of documents, and it contains multiple fields that you can query on.

Once your index is created, you can define a schedule on which the index is populated: once every hour, several times an hour, and so on.

When everything is configured and up and running you can start using the Azure Search service to start querying your indexed data for results.
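To give an idea of what such a query looks like, here is a small Python sketch that builds an Azure Search REST query with a metadata filter. The service name, index name and metadata field are hypothetical, and the api-version shown is an assumption:

```python
from urllib.parse import urlencode

def build_search_request(service_name, index_name, api_key, search_text, filter_expr):
    """Build the URL and headers for an Azure Search query over the REST API.
    The service/index names passed in are hypothetical examples."""
    params = {
        'api-version': '2017-11-11',  # assumed api-version for that period
        'search': search_text,
        '$filter': filter_expr,
    }
    url = 'https://%s.search.windows.net/indexes/%s/docs?%s' % (
        service_name, index_name, urlencode(params))
    headers = {'api-key': api_key, 'Accept': 'application/json'}
    return url, headers

url, headers = build_search_request(
    'myservice', 'files-index', '<api-key>',
    search_text='invoice',
    filter_expr="metadata_customer eq 'contoso'")
print(url)
```

The request can then be sent with any HTTP client; the index does the heavy lifting instead of the blob listing loop from the initial implementation.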

Fitting it all together

Below you can find a simplified example of our initial implementation. You can see that we were directly querying Blob Storage and needed to fetch the attributes of each single blob file and match them against the search criteria.


This is how our current high-level implementation looks. We are using the Azure Search engine to provide both queries and filters, so we immediately find what we need.


Azure Search immediately gave us the performance we needed and was fairly easy to set up and use.

We struggled a bit finding our way around some of the query limitations and options in the basic Azure Search query syntax, but quickly came to the conclusion that the Lucene query syntax provides the rich query capabilities we needed to search the metadata.


Cheers, Glenn

Categories: Azure, Architecture
written by: Glenn Colpaert

Posted on Tuesday, January 16, 2018 2:55 PM

by Massimo Crippa

Azure API Management Versions and Revisions went GA. It's time to choose a version scheme and migrate a flat list of APIs into a versionset.

On January 11 the Azure API Management Versions and Revisions feature went GA. Thanks to this feature, it’s now easier to manage the life cycle of your APIs and to change and test your gateway configuration without runtime impact.

For more details, check the official announcement.

Before the introduction of this feature, it wasn’t possible to explicitly define the version strategy for an API. Therefore, every new version was approached as a new API. From the user interface point of view, that approach resulted in a single flat list of assets. If we now use a “versionset”, we can specify how the API versions are managed (path, query, header) and group them together.

In this blog post we will see how to migrate from the flat structure (image below) to the grouped view using the ARM REST API. 

All the requests to the ARM REST API must contain the Authorization header with a bearer token to secure the request. The target base path is the following: https://management.azure.com/subscriptions/{sid}/resourceGroups/{rg}/providers/Microsoft.ApiManagement/service/{tid}/


The procedure is pretty straightforward: 

  • Create a versionset
  • Update the API to join the new versionset
  • Repeat the update procedure for each API

Create a versionset 

First, create a versionset to group the different versions of an API together. The versionset defines the API name and the version scheme that will be applied to all the versions that belong to it. In this example I chose the "path" version scheme.

  • The HTTP method to create a versionset is PUT 
  • The operation's path is: api-version-sets/{versionSetId}?api-version={apiVersionId}
  • The displayName value is the API name that will be rendered in the publisher portal, developer portal and in the swagger file produced by the gateway.

If the call succeeds, you get a 201 Created with the JSON representation of the created resource. Please note that the versionset will be displayed in the Azure Portal only when the first API is added to it.
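As a sketch of the call above, the following Python snippet builds the PUT request for a versionset. The "b2b" id and display name are examples; the api-version and the "Segment" value (the ARM name for the "path" scheme) are assumptions based on the ARM API of that period:

```python
import json

def versionset_request(base_path, version_set_id, display_name,
                       api_version='2017-03-01'):
    """Build the PUT request that creates a versionset.
    'Segment' is assumed to be the ARM value for path-based versioning."""
    url = '%sapi-version-sets/%s?api-version=%s' % (
        base_path, version_set_id, api_version)
    body = {
        'properties': {
            'displayName': display_name,      # name rendered in the portals
            'versioningScheme': 'Segment',    # path-based versioning
        }
    }
    return 'PUT', url, json.dumps(body)

base = ('https://management.azure.com/subscriptions/{sid}/resourceGroups/{rg}/'
        'providers/Microsoft.ApiManagement/service/{tid}/')
method, url, payload = versionset_request(base, 'b2b', 'B2B')
print(method, url)
```

The same helper can be reused for the B2C versionset by changing the id and display name.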

Link the API to the VersionSet

Let's modify the B2B APIs to join the versionset created with the previous call. To achieve that, we need to update the APIs and add two new fields:

  • The "apiVersion" field with the API version value (e.g. v1)
  • The "apiVersionSetId" field with the pointer to the versionset we created with the previous step.

Because the API version number will be added by the API gateway, it's necessary to update the "path" field to remove the version from the base path. The image below compares the JSON representation of the API with the changes to be patched.

  • The HTTP method is PATCH 
  • The operation's path is: /apis/{apiId}?api-version={apiVersionId} 
  • This is a partial update, so PATCH is the method to be used. Do not use the PUT method: you would lose all the API's operations.
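The PATCH body described above can be sketched as follows; the field names come from this walkthrough, while the concrete ids and path are illustrative:

```python
import json

def link_api_to_versionset(version, version_set_id, new_path):
    """Build the PATCH body that joins an existing API to a versionset.
    The ids passed in are hypothetical examples."""
    return json.dumps({
        'properties': {
            'apiVersion': version,  # e.g. 'v1'
            'apiVersionSetId': '/api-version-sets/%s' % version_set_id,
            # version segment removed from the path: the gateway adds it back
            'path': new_path,
        }
    })

print(link_api_to_versionset('v1', 'b2b', 'b2b'))
```

Sending this body with PATCH (never PUT) updates only these fields and leaves the API's operations untouched.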


The HTTP 204 No Content success status code indicates that the request has succeeded. Just refresh the Azure Portal to see the B2B API 1.0 added to the B2B versionset.


Perform the PATCH call for the second B2B API and repeat the same procedure for the B2C APIs to get to the final result.



With the latest service update, it's also possible to add one or more release notes to an API version. All those change log entries are shown in the developer portal on the "releaseChanges" page (docs/services/{apiId}/releaseChanges).

Using the Azure portal, it is only possible to create a change log entry by marking a revision as “current”, so let's use the REST API to load the change log.

  • The HTTP method is PUT
  • The operation's path is: /apis/{apiId}/releases/{releaseId}?api-version={apiVersionId} 
  • The release identifier must be specified in the request body ("id" field)
  • The link to the API to which the release belongs should be added in the properties ("apiId" field)

As a result you get a 201 Created, and the release note is displayed in the developer portal.
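A Python sketch of this release call, with illustrative ids (the "notes" property name and the api-version are assumptions about the ARM API of that period):

```python
import json

def release_request(base_path, api_id, release_id, notes,
                    api_version='2017-03-01'):
    """Build the PUT request that adds a change log entry (release) to an API."""
    url = '%sapis/%s/releases/%s?api-version=%s' % (
        base_path, api_id, release_id, api_version)
    body = {
        'id': release_id,  # the release identifier is repeated in the body
        'properties': {
            'apiId': '/apis/%s' % api_id,  # the API this release belongs to
            'notes': notes,                # the change log text itself
        }
    }
    return 'PUT', url, json.dumps(body)

base = ('https://management.azure.com/subscriptions/{sid}/resourceGroups/{rg}/'
        'providers/Microsoft.ApiManagement/service/{tid}/')
method, url, payload = release_request(base, 'b2b-v1', 'rel-01',
                                       'Initial release of the v1 API')
print(method, url)
```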


Versions and revisions is one of the Azure API Management’s most awaited features. Thanks to the REST API you can quickly migrate your current configuration to get the most out of it.

Thanks for reading and happy API Management everyone!



Categories: API Management, Azure
written by: Massimo Crippa

Posted on Thursday, January 11, 2018 8:49 AM

by Toon Vanhoutte

BizTalk Server offers a great feature: both inbound maps (receive ports) and outbound maps (send ports) can be executed in a dynamic fashion, depending on the message type of the message. This message type is defined as rootNodeNamespace#rootNodeName. Below, you can find an example of a receive port configured with several inbound maps.

When migrating parts of BizTalk solutions to Azure Logic Apps, it's really handy to reuse this pattern. This blog post explains how you can do this.

Configure the Integration Account

In this step, we will prepare the prerequisites to build this functionality.

  • Create an integration account.
  • Upload the required XSLT maps

  • Link your Logic App to the Integration Account, via the Workflow Settings:

Create the Logic App

It's time to create a Logic App that uses this functionality. In this blog, I've opted for a request/response pattern, which allows easy testing through Postman.


  • The first action initializes an Array variable. The variable contains a list of all expected message types, each with the corresponding transformation that must be executed.


  • The second action filters the array. It selects the object that matches the message type of the incoming message. The message type is determined through the following expression: xpath(xml(body('Transform_XML')), 'concat(namespace-uri(/*), ''#'', local-name(/*))')


  • The last action executes the mapping, of which the name is determined at runtime via this expression: body('Select_inbound_map')[0].Transform
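The message type that the xpath expression above extracts can be reproduced in a few lines of Python, which makes it easy to verify the values you put in the configuration array (the sample namespace is made up):

```python
import xml.etree.ElementTree as ET

def message_type(xml_text):
    """Return the BizTalk-style message type: rootNodeNamespace#rootNodeName."""
    root = ET.fromstring(xml_text)
    # ElementTree exposes namespaced tags as '{uri}LocalName'
    if root.tag.startswith('{'):
        namespace, local_name = root.tag[1:].split('}', 1)
    else:
        namespace, local_name = '', root.tag
    return '%s#%s' % (namespace, local_name)

# Mirror of the Logic App expression:
# concat(namespace-uri(/*), '#', local-name(/*))
print(message_type('<ns0:Order xmlns:ns0="http://example.org/order" />'))
```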

Test the Logic App

Let's use Postman to test the Logic App and verify that the correct mapping is executed in a dynamic way.



If you combine the right Logic App actions, you can quite easily give your workflows some dynamic behaviour. In case you would like to externalize the configuration that links message types and transforms, you could for example leverage Azure Blob Storage.

Categories: Azure
written by: Toon Vanhoutte