
Codit Blog

Posted on Wednesday, February 7, 2018 1:20 PM

Pim Simons by Pim Simons

During a migration from BizTalk 2006 R2 to BizTalk 2016 we ran into an issue with the “ESB Remove Namespace” pipeline component. This component is available out of the box when the ESB Toolkit is installed and is used to remove all namespaces from a message.

After successfully completing the migration and putting the new environment into production, the customer also needed to process a new message. As with the other messages received in their BizTalk environment, all of the namespaces have to be removed from the message and a new one has to be added. For this the customer had used the “ESB Remove Namespace” and “ESB Add Namespace” pipeline components, and this setup had been successfully migrated to BizTalk 2016. For more information on these pipeline components, see: https://msdn.microsoft.com/en-us/library/ee250047(v=bts.10).aspx.


However, when the new message was received by BizTalk we received this error:

Reason: The XML Validator failed to validate.
Details: The 'nil' attribute is not declared.

It turned out that the XSD of the new message has a field that is nillable, and in the message we received the field was indeed marked as nil. The “ESB Remove Namespace” pipeline component removed the xsi namespace and prefix, which caused the “XML Validator” pipeline component to fail!
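To illustrate the problem, consider a simplified, hypothetical message (the element names are made up for this example). The incoming message declares the XML Schema instance namespace and marks the field as nil:

<Order xmlns="http://example.org/order" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <DeliveryDate xsi:nil="true" />
</Order>

After the original “ESB Remove Namespace” component has stripped all namespaces, only a bare nil attribute remains. The XML Validator then rejects it, because the attribute is no longer bound to the XML Schema instance namespace:

<Order>
  <DeliveryDate nil="true" />
</Order>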

Changing the message content or the XSD was not an option, since the XSD was created and maintained by the external party that was sending us the message. We ultimately ended up recreating the “ESB Remove Namespace” pipeline component as a custom pipeline component and modifying the code.

The “ESB Remove Namespace” pipeline component contains code to process the attributes, which includes this snippet:
if (string.Compare(inReader.Name, "xmlns", StringComparison.OrdinalIgnoreCase) != 0 &&
    !inReader.Name.StartsWith("xmlns:", StringComparison.OrdinalIgnoreCase))
{
    writer.WriteStartAttribute("", inReader.LocalName, "");
    writer.WriteString(inReader.Value);
    writer.WriteEndAttribute();
}
 
We replaced this with:
// Preserve the xsi:nil attribute, including its prefix and namespace, so the
// XML Validator can still resolve it. The namespace is compared case-insensitively
// against http://www.w3.org/2001/XMLSchema-instance.
if (inReader.LocalName.ToLower() == "nil" &&
    inReader.NamespaceURI.ToLower() == "http://www.w3.org/2001/xmlschema-instance")
{
    writer.WriteStartAttribute(inReader.Prefix, inReader.LocalName, inReader.NamespaceURI);
    writer.WriteString(inReader.Value);
    writer.WriteEndAttribute();
}
// All other attributes, except namespace declarations, are copied without their namespace.
else if (string.Compare(inReader.Name, "xmlns", StringComparison.OrdinalIgnoreCase) != 0 &&
         !inReader.Name.StartsWith("xmlns:", StringComparison.OrdinalIgnoreCase))
{
    writer.WriteStartAttribute("", inReader.LocalName, "");
    writer.WriteString(inReader.Value);
    writer.WriteEndAttribute();
}
 
Now when we receive a message containing a node that is marked as nil, the custom “ESB Remove Namespace” pipeline component handles it correctly and the message is processed.

We could not find any information on MSDN about the “ESB Remove Namespace” pipeline component not supporting nillable nodes, and I find it strange that the pipeline component does not support this. To me this seems like a bug.

Categories: BizTalk
written by: Pim Simons

Posted on Tuesday, February 6, 2018 2:05 PM

Glenn Colpaert by Glenn Colpaert

Exposing data with different content types in a performant way can be a challenging problem to solve. Azure Search tackles this problem by providing a full-text search experience for web and mobile applications.

When talking about modern application development, where enterprise applications exist both on-premises and in the cloud, companies want to integrate beyond their firewall, typically with SaaS-based applications or APIs exposed by third parties.

Next to integrating with different services or applications, many companies also want to expose data in a simple, fast and secure way through API-first development. Performance and a great experience are key to the success of your APIs. This is where Azure Search comes in...

Scenario

For a data-sharing platform that we are currently building, we have a large number of files stored in Blob Storage. These files are ingested into the platform through different endpoints and protocols (HTTPS, FTP, AS2,...). When files are ingested, different types of metadata are extracted from the file content and added to the file before it is stored in Blob Storage. These metadata values are then made available for querying through APIs.

Our first implementation of the API directly queried Blob Storage to search for the specific files matching the metadata filters provided in the API calls. We started noticing the limits of this implementation because of the large number of blobs inside our storage container. This comes down to the limited query capabilities of Blob Storage: we needed to list the blobs and then do the metadata filtering inside our own implementation.

To optimize our searches and performance we quickly introduced Azure Search into our implementation. To be clear, Azure Search does not execute queries across all the blobs in Azure Blob Storage; rather, it indexes all the blobs in the storage account and puts a search layer on top of that index.

Azure Search

Azure Search is a search-as-a-service cloud solution that gives developers APIs and tools for adding a rich search experience over your content in web, mobile, and enterprise applications. All of this without managing infrastructure or needing to become a search expert.

To use Azure Search there are a couple of steps you need to take. First of all you need to provision the Azure Search service; this will be the scope of your capacity, billing and authentication, and it is fully managed through the Azure Portal or through the Management API.

When your Azure Search service is provisioned, you need to define one or more indexes linked to your search service. An index is a searchable collection of documents and contains multiple fields that you can query on.

Once your index is created, you can define a schedule for the indexer that populates it: once every hour, several times an hour, and so on.
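As a rough sketch of what that setup can look like in code, the snippet below uses the Microsoft.Azure.Search .NET SDK to create a blob data source, an index and a scheduled indexer. The service name, key, container, index name and fields are illustrative assumptions, not the actual configuration of our platform.

using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Administrative client for managing data sources, indexes and indexers.
var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("<admin-api-key>"));

// Data source pointing to the blob container that holds the ingested files.
var dataSource = DataSource.AzureBlobStorage("files-datasource", "<storage-connection-string>", "files");
serviceClient.DataSources.CreateOrUpdate(dataSource);

// Index with the metadata fields we want to filter and search on.
var index = new Index
{
    Name = "files-index",
    Fields = new[]
    {
        new Field("id", DataType.String) { IsKey = true },
        new Field("customer", DataType.String) { IsSearchable = true, IsFilterable = true },
        new Field("documentType", DataType.String) { IsFilterable = true },
        new Field("ingestedOn", DataType.DateTimeOffset) { IsFilterable = true, IsSortable = true }
    }
};
serviceClient.Indexes.CreateOrUpdate(index);

// Indexer that crawls the data source on a schedule, e.g. once every hour.
var indexer = new Indexer
{
    Name = "files-indexer",
    DataSourceName = dataSource.Name,
    TargetIndexName = index.Name,
    Schedule = new IndexingSchedule(TimeSpan.FromHours(1))
};
serviceClient.Indexers.CreateOrUpdate(indexer);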

When everything is configured and up and running you can start using the Azure Search service to start querying your indexed data for results.

Fitting it all together

Below you can find a simplified example of our initial implementation. You can see that we were directly querying Blob Storage and needed to fetch the attributes of each individual blob and match them against the search criteria.
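In code, that initial approach boils down to something like the following sketch, which lists the blobs with the WindowsAzure.Storage SDK and filters them in memory on their metadata. The container name and metadata key are illustrative assumptions.

using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static IList<CloudBlockBlob> FindFilesForCustomer(string connectionString, string customer)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var container = account.CreateCloudBlobClient().GetContainerReference("files");

    var matches = new List<CloudBlockBlob>();

    // List every blob in the container and pull down its metadata one by one.
    foreach (var blob in container.ListBlobs(useFlatBlobListing: true).OfType<CloudBlockBlob>())
    {
        blob.FetchAttributes();

        // Filtering happens client-side: the more blobs, the slower this gets.
        if (blob.Metadata.TryGetValue("customer", out var value) && value == customer)
        {
            matches.Add(blob);
        }
    }

    return matches;
}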

 

This is what our current high-level implementation looks like. We are using the Azure Search engine, with both queries and filters, to immediately find what we need.
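A minimal sketch of such a search query, using the Microsoft.Azure.Search .NET SDK, could look like this; the index name, field names and filter are again illustrative assumptions. Setting the query type to Full enables the Lucene query syntax mentioned below.

using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Query-only client; a query key is sufficient for searching.
var indexClient = new SearchIndexClient("my-search-service", "files-index", new SearchCredentials("<query-api-key>"));

var parameters = new SearchParameters
{
    // Full enables the Lucene query syntax instead of the simple syntax.
    QueryType = QueryType.Full,
    // OData filter on the metadata fields defined in the index.
    Filter = "documentType eq 'invoice'",
    Select = new[] { "id", "customer", "ingestedOn" }
};

// Free-text search combined with the filter above.
var results = indexClient.Documents.Search("customer:contoso*", parameters);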

Conclusion

Azure Search immediately gave us the performance we needed and was fairly easy to set up and use.

We struggled a bit finding our way around some of the limitations of the simple query syntax in Azure Search, but quickly came to the conclusion that the Lucene query syntax provides the rich query capabilities we needed to search the metadata.

 

Cheers, Glenn

Categories: Azure, Architecture
written by: Glenn Colpaert

Posted on Tuesday, January 16, 2018 2:55 PM

Massimo Crippa by Massimo Crippa

Azure API Management Versions and Revisions went GA. It's time to choose a version scheme and migrate a flat list of APIs into a versionset.

On January 11 the Azure API Management Versions and Revisions feature went GA. Thanks to this feature it's now easier to manage the life cycle of your APIs and to change and test your gateway configuration without runtime impact.

For more details, check the announcement via this link.

Before the introduction of this feature, it wasn't possible to explicitly define the version strategy for an API. Therefore, every new version was approached as a new API. From the user interface point of view, that approach resulted in a single flat list of assets. If we now use a “versionset”, we can specify how the API versions are managed (path, query or header) and group them together.

In this blog post we will see how to migrate from the flat structure (image below) to the grouped view using the ARM REST API. 

All the requests to the ARM REST API must contain the Authorization header with a bearer token to secure the request. The target base path is the following: https://management.azure.com/subscriptions/{sid}/resourceGroups/{rg}/providers/Microsoft.ApiManagement/service/{tid}/
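For the examples below I will assume an HttpClient that is already configured with this base path and a bearer token; this is a minimal sketch in C#, assuming the token has already been acquired (for example via Azure AD), and the placeholder values are not real.

using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Placeholders for your subscription, resource group and API Management service name.
var subscriptionId = "<sid>";
var resourceGroup = "<rg>";
var serviceName = "<tid>";

var basePath = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}" +
               $"/providers/Microsoft.ApiManagement/service/{serviceName}/";

var client = new HttpClient { BaseAddress = new Uri(basePath) };
// Bearer token acquisition is out of scope for this sketch.
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<bearer-token>");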

Procedure

The procedure is pretty straightforward: 

  • Create a versionset
  • Update the API to join the new versionset
  • Repeat the update procedure for each API

Create a versionset 

First, create a versionset to group together the different versions of an API. The versionset defines the API name and the version scheme that will be applied to all the versions that belong to the versionset. In this example I chose the "path" version scheme.

  • The HTTP method to create a versionset is PUT 
  • The operation's path is: api-version-sets/{versionSetId}?api-version={apiVersionId}
  • The displayName value is the API name that will be rendered in the publisher portal, developer portal and in the swagger file produced by the gateway.
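Using the HttpClient configured earlier, the call could look roughly like the sketch below. The versionset id, display name and api-version value are illustrative assumptions; the body follows the ARM representation of a version set, where the "path" scheme is expressed as the Segment versioning scheme.

using System.Text;

// Versionset id "b2b" and displayName "B2B" are placeholders for this example.
var versionSetBody = @"{
  ""properties"": {
    ""displayName"": ""B2B"",
    ""versioningScheme"": ""Segment""
  }
}";

// api-version 2017-03-01 is used here as an example; use the version appropriate for your service.
var response = client.PutAsync(
    "api-version-sets/b2b?api-version=2017-03-01",
    new StringContent(versionSetBody, Encoding.UTF8, "application/json")).Result;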

If the call succeeds, you get a 201 Created response with the JSON representation of the created resource. Please note that the versionset will be displayed in the Azure Portal only when the first API is added to it.

Link the API to the VersionSet

Let's modify the B2B APIs to join the versionset created with the previous call. To achieve that we need to update the APIs and add two new fields:

  • The "apiVersion" field with the API version value (e.g. v1)
  • The "apiVersionSetId" field with the pointer to the versionset we created with the previous step.

Because the API version number will be added by the API gateway, it's necessary to update the "path" field to remove the version from the base path. The image below compares the JSON representation of the API with the changes to be patched.

  • The HTTP method is PATCH
  • The operation's path is: /apis/{apiId}?api-version={apiVersionId}
  • This is a partial update, so PATCH is the method to be used. Do not use the PUT method: you will lose all of the API's operations.
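A sketch of that PATCH call with the same HttpClient follows; the API id, path, version and versionset identifier are assumptions based on the B2B example, not the actual values from the post.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Join the existing B2B API to the versionset and strip the version from its path.
// "apiVersionSetId" should point to the versionset resource; the value shown is illustrative.
var patchBody = @"{
  ""properties"": {
    ""path"": ""b2b"",
    ""apiVersion"": ""v1"",
    ""apiVersionSetId"": ""/api-version-sets/b2b""
  }
}";

var request = new HttpRequestMessage(new HttpMethod("PATCH"), "apis/b2b-v1?api-version=2017-03-01")
{
    Content = new StringContent(patchBody, Encoding.UTF8, "application/json")
};

// If-Match is required when updating an existing resource; * matches any ETag.
request.Headers.IfMatch.Add(EntityTagHeaderValue.Any);

var response = client.SendAsync(request).Result;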

  

An HTTP 204 No Content response indicates that the request has succeeded. Just refresh the Azure Portal to see the B2B API 1.0 added to the B2B versionset.

 

Perform the PATCH call for the second B2B API and repeat the same procedure for the B2C APIs to get to the final result.

 

Releases

With the latest service update it's also possible to add one or more release notes to an API version. All those change logs are shown on the developer portal in the "releaseChanges" page (docs/services/{apiId}/releaseChanges).

Using the Azure Portal, it is only possible to create a change log entry when marking a revision as “current”, so let's use the REST API to load the change log.

  • The HTTP method is PUT
  • The operation's path is: /apis/{apiId}/releases/{releaseId}?api-version={apiVersionId} 
  • The release identifier must be specified in the request body ("id" field)
  • The link to the API to which the release belongs should be added in the properties ("apiId" field)
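Again as a sketch with the same HttpClient; the release id, notes and apiId value are illustrative assumptions, while the "id" and "apiId" fields are the ones described in the list above.

using System.Text;

// Create a change log entry (release) for the B2B API; ids and notes are placeholders.
var releaseBody = @"{
  ""id"": ""b2b-v1-release-1"",
  ""properties"": {
    ""apiId"": ""/apis/b2b-v1"",
    ""notes"": ""Initial release of the B2B API v1.""
  }
}";

var response = client.PutAsync(
    "apis/b2b-v1/releases/b2b-v1-release-1?api-version=2017-03-01",
    new StringContent(releaseBody, Encoding.UTF8, "application/json")).Result;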

As a result you get a 201 Created and the release note is displayed in the developer portal.

Conclusion

Versions and revisions is one of Azure API Management's most awaited features. Thanks to the REST API you can quickly migrate your current configuration to get the most out of it.


Thanks for reading and happy API Management everyone!

Massimo

 

Categories: API Management, Azure
written by: Massimo Crippa

Posted on Thursday, January 11, 2018 8:49 AM

Toon Vanhoutte by Toon Vanhoutte

BizTalk Server offers a great feature: both inbound maps (receive ports) and outbound maps (send ports) can be executed in a dynamic fashion, depending on the message type of the message. This message type is defined as rootNodeNamespace#rootNodeName. Below, you can find an example of a receive port configured with several inbound maps.

When migrating parts of BizTalk solutions to Azure Logic Apps, it's really handy to reuse this pattern. This blog post explains how you can do this.

Configure the Integration Account

In this step, we will prepare the prerequisites to build this functionality.

  • Create an integration account.
  • Upload the required XSLT maps

  • Link your Logic App to the Integration Account, via the Workflow Settings:

Create the Logic App

It's time to create a Logic App that uses this functionality. In this blog, I've opted for a request/response pattern, which allows easy testing through Postman.

 

  • The first action initializes an Array variable. The variable contains a list of all expected message types and their corresponding transformations that must be executed. A sketch of this array's structure follows after this list of actions.

 

  • The second action filters the array. It selects the object that matches the message type of the incoming message. The message type is determined through the following expression: xpath(xml(body('Transform_XML')), 'concat(namespace-uri(/*), ''#'', local-name(/*))')

 

  • The last action executes the mapping, of which the name is determined at runtime via this expression: body('Select_inbound_map')[0].Transform
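As an illustration, the array variable could be shaped like the snippet below. The property name MessageType, the namespaces and the map names are assumptions for this example; only the Transform property is visible in the expression above. The filter action matches MessageType against the computed message type, and the transform action then picks the Transform value of the single remaining object.

[
  {
    "MessageType": "http://contoso.com/order#Order",
    "Transform": "Contoso_Order_To_Canonical.xslt"
  },
  {
    "MessageType": "http://fabrikam.com/invoice#Invoice",
    "Transform": "Fabrikam_Invoice_To_Canonical.xslt"
  }
]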

Test the Logic App

Let's use Postman to test the Logic App and verify that the correct mapping is executed in a dynamic way.

 

Conclusion

If you combine the right Logic App actions, you can quite easily give your workflows some dynamic behaviour. In case you would like to externalize the configuration that links message types and transforms, you could for example leverage Azure Blob Storage.

Categories: Azure
written by: Toon Vanhoutte

Posted on Friday, January 5, 2018 9:04 AM

Tom Kerkhove by Tom Kerkhove

Things change, and so does the cloud. New services are being added and integration between services is being improved, but services also become deprecated. We need to embrace change and design for it.

Our industry has shifted quite a lot in recent years: we moved from spinning up our own on-premises servers to run our software to hosting more and more in the cloud.

This brings a lot of benefits, agility being one of them. By moving from yearly releases to monthly or weekly releases, product teams can get new features and services out of the door faster and receive feedback more easily. This helps you quickly evolve your product, and seeing how your consumers are using it allows you to adapt or release bug fixes more quickly.

This is exactly what Microsoft Azure and other cloud platforms are doing. Every blink of an eye they release new features! Keeping up with all the latest and greatest is sometimes like drinking from a water hose: you can manage to do it, but not for long! Some might say that things are even going too fast, but that's a topic on its own.

The key learning of the journey I've seen so far is that things change, and you'd better be prepared.

Introduction of new services

Over time, ecosystems can expand by the addition of new services that can change the way you think about the systems that you are building or fill in the gaps that you now need to work around.

Azure Event Grid is one of the newest services in Microsoft Azure and brings a unique capability: support for sending notifications in event-driven architectures. This ties in with the recent "Serverless" trend, where everything needs to be event-driven and you only care about the logic that needs to run, not how it's running. Event Grid was the last piece of the puzzle to go fully event-driven, which can make us question our current approach to existing systems.

Better integration between services

Another aspect of change is that services are easier to integrate with each other over time. This allows you to achieve certain aspects without having to do the heavy lifting yourself.

An example of this is Azure Logic Apps & Azure Table Storage. If you wanted to use these together in the past, you had to build & deploy your own custom Table Storage API App, because it was not there out-of-the-box. Later on, a Table Storage connector was added to the connector portfolio, giving you the same experience without having to do anything yourself and allowing you to switch very easily.

Azure AD Managed Service Identity (MSI) is another good example, which makes authentication with Azure AD very easy, simplifying authentication with Azure Key Vault. No need to worry about storing authentication information on your compute nodes anymore; MSI will handle it for you! And while this makes things easier, it's also more secure, since you don't have the additional risk of storing the information somewhere: it's handled by the ecosystem and is not your problem anymore. It's not about completely removing security risks, it's about limiting them.

You've got to move it, move it.

But then comes the day that one of the services on which you depend is no longer being invested in, or even worse, is being deprecated. Next thing you know, you need to migrate to another (newer) service or, if you're unlucky, there is no migration path at all.

This is not a walk in the park because it comes with a lot of important questions:

  • Does it have the same feature set?
    • If not, do we need to migrate it to multiple services or look at using an offering from another vendor/community?
  • What is the new pricing story? Will it be more expensive?
  • What is the current status of the newer service? Is it stable enough (yet)?
  • How about the protocols that are being used, both for the old service and for the new alternatives?
    • Does it support the same protocols or are they proprietary?
    • Can we benefit from using open standards instead?
    • Does it bring any (new) vendor lock-ins?
  • Do I have to revise my ALM story or does it follow a similar approach?
  • And many more

Unfortunately, 2017 was the year in which Azure Access Control Service (ACS) was officially deprecated, and existing customers have until November 7, 2018, before the service is shut down. This might sound like a long time, but migrating off a service onto a new one takes real effort: you need to evaluate alternatives, plan the migration, implement the changes, re-test everything and push it to the masses, so it's fair to say that it takes a certain amount of time.

ACS, in particular, is an interesting case because there is a decent migration guide and the blog post gives you guidance as well, but that does not mean that you're off the hook. While you can migrate to Azure AD or Azure AD B2C, these alternatives do not support all the authentication protocols that ACS did. Luckily there are also communities that have (OSS) technology available, such as IdentityServer, but that's no guarantee that it has the same capabilities as what you are migrating from.

Is ACS an exception? Certainly not. Remember Azure Remote App? Gone. Azure Power BI Embedded? Deprecated and you should migrate to Power BI.

This is far from a rant; the point is that building systems is not the hard part, maintaining them is. And at a certain point in time, you need to make hard decisions which unfortunately sometimes impact customers.

More information on Power BI Embedded can be found here as well.

Deprecated? No, but you'd better use our vNext

Next to deprecation, some services are improved by launching a brand new major version that improves on its precursor, a service upgrade if you will. This means that the service is still around, but that it has changed so dramatically that you will need to migrate as well.

Azure Data Factory, which I've written about recently, is a good example of this: you can still use Azure Data Factory v1, but v2 has arrived and will be the way forward. This means that you can keep using the service you like, but you will have to migrate at some point, since there are potentially a few breaking changes because the infrastructure supporting it has changed.

You can see a service upgrade as a light version of deprecation: your current version is going away, but you can stick around and use the new version. If you're lucky, you don't need to migrate at all, or you can use one of the provided migration tools to do it for you. However, you still need to make sure that everything keeps working and that you actually make the switch, but you get new features in return.

Embracing Change

Change can impact the architecture of your application in a variety of ways, and we need to design for change because we will need it.

Another interesting aspect of the ACS lifecycle is that, if you've been around for a while, you might have noticed that the service didn't get any investments in the last couple of years, but neither did Azure Cloud Services. Do we need to panic? No. But it's safe to say that Cloud Services will go away at some point as well, and it's always good to look around and see if there are any alternatives. Do we need to switch as soon as possible? No.

Are only old services going away? No. Thanks to agile delivery it is very easy to ship an MVP and see what the feedback is, but if nobody likes it or no business need gets fulfilled, it is probably going away. A good example of this is Azure BizTalk Services, which was around for a year but was killed particularly fast because nobody really liked it; Azure Logic Apps is its successor, and people like it a lot more.

It is crucial to find a balance between cutting-edge technology & battle-tested services. Every service brings something to the table, but is it really what you need or do you just want to use a new shiny technology/service? Compare all candidates and see what benefits & trade-offs they have and use the right service for the job.

I'm proud to say that I'm still using Azure Cloud Services, despite the lack of investment by Microsoft. Why? Because it gives me what I need and there is no alternative that is similar to what Cloud Services gives for our scenario. However, this does not mean that we will use it forever and we keep an eye open for the development of other services.

When new technologies or services arise, it's always good to have a look and see what they bring via a small spike or POC, but be cautious before you integrate them into your application. Is it worth switching (already)? Here are a few questions you could ask yourself:

  • What does it bring over the current solution?
  • What is the performance of it?
  • What is the risk/impact of it?
  • What is the monitoring story around it?
  • What is the security story around it?
  • Can we do automated deployments?

Embrace change. Make sure that you can easily change things in your architecture without your customers knowing about them.

How? Well, it always depends on your application. To give you one example: make sure that your public API infrastructure is decoupled from your internal infrastructure and that you use DNS for everything. Azure API Management is a perfect fit for this because it decouples the consumers from the backend, giving you control over things like advanced routing and easy security, regardless of your physical backend. If you decide to decompose an API that is hosted on a Web App into multiple microservices running in Kubernetes or Azure Functions, you can very easily do that behind the scenes while your customers are still calling the same operations on your API proxy.

Certainly do this if you are working with webhooks that are being called by third parties. You can ask consumers to call a new operation, although you should avoid that, but with webhook registrations you cannot. One year ago we decided that all webhooks should be routed through Azure API Management, so that we benefit from the routing aspect and can still secure our physical API, since webhooks do not always support security as they should.

Conclusion

This article is far from a rant; it is meant to create awareness that things are moving fast and that we need to find a balance between cutting-edge technology & battle-tested services.

Use a change-aware mindset when designing your architecture, because you will need it. Think about the things that you depend on, but also be aware that you can only do this to a certain degree.

In my example above I talked about using Azure API Management as the customer-facing endpoint for your API infrastructure. Great! But what if that one goes away? Then you'll have to migrate everything; you can only be cautious to a certain degree, because in the end you'll need to depend on something.

Thanks for reading,

Tom.